AI models are inherently difficult to interpret


Deep learning has propelled the capabilities of AI forward, beginning with AlexNet's landmark win in the 2012 ImageNet competition. However, deep learning models are inherently hard to interpret: a trained model consists of millions of numerical values called "weights", which cannot be reduced to a simple, solvable equation. This is often referred to as the "black-box" problem.
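To make that scale concrete, the short sketch below (assuming PyTorch and torchvision are installed) counts the weights in AlexNet itself:

```python
import torchvision.models as models

# AlexNet, the 2012 ImageNet winner mentioned above, is small by
# today's standards yet still contains tens of millions of weights.
model = models.alexnet()  # randomly initialized; no download needed

num_weights = sum(p.numel() for p in model.parameters())
print(f"AlexNet parameters: {num_weights:,}")  # roughly 61 million
```

No single weight has a human-readable meaning; the model's behavior emerges from all of them interacting at once, which is why inspecting the numbers directly tells us so little.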

In response, a range of methods for interpreting and explaining AI models has been developed, a field often referred to as explainable AI (XAI).
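As one illustrative example of such a method (not a survey, and chosen here only for brevity), gradient-based attribution asks which inputs most influence a model's prediction. The minimal sketch below, again assuming PyTorch and torchvision, computes a crude saliency map for AlexNet using a random tensor in place of a real image:

```python
import torch
import torchvision.models as models

# Load a pretrained AlexNet (weights download on first run).
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.eval()

# A random stand-in for a real 224x224 RGB image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate from the top predicted class.
scores = model(image)
scores[0, scores.argmax()].backward()

# The per-pixel gradient magnitude is a simple "saliency map":
# large values mark pixels whose change most affects the prediction.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```

Rather than solving the model's equation, techniques like this probe the black box from the outside, turning its millions of weights into explanations a human can inspect.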