The complexity of a domain problem can slow, or even prevent, the learning process of a neural network. This obstacle is difficult to overcome because neural networks, as widely noted in the literature, lack interpretability of their internal structures. In this paper, we present a visualization approach that enhances the understanding of neural networks. Our approach visualizes input and weight contributions, supports sensitivity analysis, and provides guidance for pruning less influential features, thereby reducing the complexity of the domain problem while maintaining acceptable error rates. We conduct experiments on various datasets to demonstrate the effectiveness of our approach.
Key words: Neural network, visualization, input contribution, sensitivity analysis
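To illustrate the kind of sensitivity analysis referred to above, the sketch below scores each input of a tiny feed-forward network by perturbing it and measuring the resulting change in the output. The network, its weights, and the `sensitivity` helper are hypothetical examples, not the method proposed in the paper; a perturbation-based (central-difference) scheme is assumed.

```python
import numpy as np

# Tiny one-hidden-layer network with fixed random weights (illustrative only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # 4 inputs -> 3 hidden units
W2 = rng.normal(size=(3, 1))   # 3 hidden units -> 1 output

def forward(x):
    h = np.tanh(x @ W1)        # hidden-layer activations
    return h @ W2              # linear output, shape (1,)

def sensitivity(x, eps=1e-4):
    """Perturbation-based sensitivity: magnitude of the output change
    when each input is nudged by eps (central differences)."""
    s = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        diff = forward(xp) - forward(xm)
        s[i] = abs(diff[0]) / (2 * eps)
    return s

x = rng.normal(size=4)
scores = sensitivity(x)
ranking = np.argsort(scores)[::-1]  # most influential input first
```

Inputs whose scores stay persistently low across a dataset are natural candidates for the feature pruning the abstract describes.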
Copyright © 2021 Author(s) retain the copyright of this article.
This article is published under the terms of the Creative Commons Attribution License 4.0