Understanding Neural Networks through Representation Erasure
Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery
Improving the Interpretability of Deep Neural Networks with Stimulated Learning
HILK++: An Interpretability-Guided Fuzzy Modeling Methodology for Learning Readable and Comprehensible Fuzzy Rule-Based Classifiers
Visualizing the Hidden Activity of Artificial Neural Networks
ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models
DeepEyes: Progressive Visual Analytics for Designing Deep Neural Networks
Discovering Internal Representations from Object-CNNs Using Population Encoding
Axiomatic Attribution for Deep Networks
Detecting Statistical Interactions from Neural Network Weights
Decoding the Deep: Exploring Class Hierarchies of Deep Representations Using Multiresolution Matrix Factorization
Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks
Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition
Every Filter Extracts a Specific Texture in Convolutional Neural Networks
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation
Generating Interpretable Images with Controllable Structure
Network Dissection: Quantifying Interpretability of Deep Visual Representations