Some Philosophical-ish Inquiries about Interpretation and Understanding

We try to interpret machine learning models to make them more understandable to humans. For example, feature visualization produces human-recognizable objects and human-understandable concepts from deep image models. However, many feasible interpretations are possible. Which interpretation makes more sense? Is an interpretation equivalent to the underlying working mechanism, whatever that might be? What if the so-called ‘true’ mechanism of a model is itself not human-understandable? These inquiries can be both intriguing and frustrating. After some thought, it occurs to me that these are legitimate questions about understanding anything at all, not just models. It’s a philosophical argument about truth, and there are plenty of lessons to take from the development of science. We are not seeking true understanding, whatever that is. We are pursuing understandings that can effectively guide us within a context.
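
To make the feature-visualization example concrete, here is a minimal sketch of activation maximization: optimize an input image so that one unit of a pretrained vision model fires strongly. The choice of VGG16, the layer index, and the channel below are illustrative assumptions on my part, not a canonical recipe.

```python
# Minimal sketch of feature visualization by activation maximization:
# ascend the gradient of one channel's mean activation with respect to the input.
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)          # we only optimize the input image

target_layer = model.features[24]    # an arbitrary convolutional layer (assumption)
target_channel = 42                  # an arbitrary channel to visualize (assumption)

activation = {}
def hook(module, inputs, output):
    activation["value"] = output
target_layer.register_forward_hook(hook)

# Start from noise and maximize the chosen channel's mean activation.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    loss = -activation["value"][0, target_channel].mean()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)       # keep pixels in a displayable range

# `image` now shows a pattern this unit responds to -- one of many
# possible "interpretations" of what the unit represents.
```

Different layers, channels, and regularizers produce very different pictures, which is exactly the multiplicity of interpretations the questions above are about.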

Representations

Languages, mathematical notations, and graphics are all representations, which allow the development of complex ideas.
