On Network Science And Mutual Information For Explaining Deep Neural Networks
Brian Davis, Radu Marculescu, José Moura, Umang Bhatt, Kartikeya Bhardwaj
SPS
Length: 14:57
In this paper, we present a new approach to interpreting deep learning models. By coupling mutual information with network science, we explore how information flows through feedforward networks. We show that an efficient approximation of mutual information lets us define a measure quantifying how much information flows between any two neurons of a deep learning model. Building on this measure, we propose NIF, Neural Information Flow, a technique for codifying information flow that exposes deep learning model internals and provides feature attributions.
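To make the core idea concrete, here is a minimal sketch of estimating mutual information between the activations of two neurons from paired samples, using a simple histogram (plug-in) estimator. This is only an illustration of the quantity being measured; the paper's NIF approximation is its own technique, and the names `mutual_information` and `n_bins` are illustrative, not from the paper.

```python
# Hedged sketch: histogram-based estimate of mutual information I(X; Y)
# between the activations of two neurons, from paired samples.
import numpy as np

def mutual_information(x, y, n_bins=16):
    """Estimate I(X; Y) in nats via a 2-D histogram of paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    pxy = joint / joint.sum()               # empirical joint p(x, y)
    px = pxy.sum(axis=1, keepdims=True)     # marginal p(x), shape (n_bins, 1)
    py = pxy.sum(axis=0, keepdims=True)     # marginal p(y), shape (1, n_bins)
    mask = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=10_000)                 # activations of neuron A
b = a + 0.1 * rng.normal(size=10_000)       # neuron B depends strongly on A
c = rng.normal(size=10_000)                 # neuron C is independent of A

# The dependent pair should carry far more information than the independent one.
print(mutual_information(a, b) > mutual_information(a, c))
```

In practice, plug-in estimators like this are biased for small samples or many bins, which is one reason an efficient, well-behaved approximation of mutual information matters when scoring every neuron pair in a deep network.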