Marco Ancona1, Enea Ceolini2, Cengiz Öztireli1, Markus Gross1
1Department of Computer Science, ETH Zurich, Switzerland
2Institute of Neuroinformatics, University of Zürich and ETH Zürich, Switzerland
The problem of explaining complex machine learning models, including Deep Neural Networks, has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, the definition of an explanation itself is still debated. Moreover, only a few attempts have been made to compare explanation methods from a theoretical perspective. In this chapter, we discuss the theoretical properties of several attribution methods and show how they share the same idea of using the gradient information as a descriptive factor for the functioning of a model. Finally, we discuss the strengths and limitations of these methods and compare them with available alternatives.
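To make the shared idea concrete, the sketch below illustrates gradient-based attribution in its simplest forms: the plain gradient ("saliency") and the common gradient-times-input variant. The model, shapes, and attribution rules here are illustrative assumptions, not the chapter's own code.

```python
# A minimal sketch (not from the chapter) of using the gradient of the model
# output w.r.t. the input as an attribution signal. The network below is a
# hypothetical stand-in for any differentiable model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(1, 4, requires_grad=True)  # input we want to explain
model(x).sum().backward()                  # populates x.grad with d(output)/dx

saliency = x.grad                            # plain gradient attribution
grad_times_input = x.grad * x.detach()       # "gradient * input" variant
print(saliency, grad_times_input)
```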