We interpret machine learning models to make them more human-understandable. For example, feature visualization produces human-recognizable objects and human-understandable concepts from deep image models. However, numerous feasible interpretations are possible. Which interpretation makes more sense? Is an interpretation equivalent to the underlying working mechanism, whatever that might be? What if the so-called ‘true’ mechanism of a model is itself not human-understandable? These questions are intriguing and frustrating at the same time. After some thought, it occurs to me that these are legitimate questions about understanding anything at all, not just models. It’s a philosophical debate about truth, and there are plenty of lessons to take from the history of science. We are not seeking true understanding, whatever that is. We are pursuing understandings that can effectively guide us within a given context.
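To make the feature visualization example concrete, here is a minimal sketch of one common technique, activation maximization: starting from noise, we adjust an input image by gradient ascent so that a chosen channel fires strongly. The model, layer index, and channel number below are arbitrary illustrative choices on my part, not prescribed by any particular paper.

```python
import torch
import torchvision.models as models

# Load a pretrained image classifier (any conv net works for this sketch).
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

# Pick a channel in an intermediate conv layer whose "preferred stimulus"
# we want to see; layer 10 and channel 42 are arbitrary choices here.
layer = model.features[10]
channel = 42

activation = {}
def hook(module, inp, out):
    activation["value"] = out

layer.register_forward_hook(hook)

# Start from random noise and ascend the gradient of the channel's mean
# activation with respect to the input image.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(img)
    # Maximize the channel's mean activation, i.e. minimize its negative.
    loss = -activation["value"][0, channel].mean()
    loss.backward()
    optimizer.step()

# `img` now roughly shows what this channel responds to; practical feature
# visualization adds regularizers (jitter, blur, decorrelated color spaces)
# to make the result more human-recognizable.
```

Note how even this toy example surfaces the ambiguity above: different starting noise, regularizers, or layers yield different "interpretations" of the same channel.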
A Venn diagram of the relationship between visualization and interpreting deep learning:
The visualization research community has published a number of papers on visualizing and understanding deep neural networks. This article provides an overview of those publications.
Languages, mathematical notations, and graphics are all representations, and representations enable the development of complex ideas.