
Research Article
Layer-Wise Entropy Analysis and Visualization of Neurons Activation
@INPROCEEDINGS{10.1007/978-3-030-41117-6_3,
  author={Longwei Wang and Peijie Chen and Chengfei Wang and Rui Wang},
  title={Layer-Wise Entropy Analysis and Visualization of Neurons Activation},
  proceedings={Communications and Networking. 14th EAI International Conference, ChinaCom 2019, Shanghai, China, November 29 -- December 1, 2019, Proceedings, Part II},
  proceedings_a={CHINACOM PART 2},
  year={2020},
  month={2},
  keywords={Entropy analysis; Visualization; Neurons activation},
  doi={10.1007/978-3-030-41117-6_3}
}
Longwei Wang
Peijie Chen
Chengfei Wang
Rui Wang
Year: 2020
Layer-Wise Entropy Analysis and Visualization of Neurons Activation
CHINACOM PART 2
Springer
DOI: 10.1007/978-3-030-41117-6_3
Abstract
Understanding the inner working mechanism of deep neural networks (DNNs) is essential for researchers seeking to design and improve them. In this work, entropy analysis is leveraged to study the neuron activation behavior of the fully connected layers of DNNs. The entropy of each layer's activation patterns provides an efficient performance metric for evaluating the accuracy of the network model. The study is conducted on a well-trained network model: the activation patterns of the shallow and deep fully connected layers are analyzed by feeding in images of a single class. It is found that, for a well-trained model, the entropy of the neuron activation pattern decreases monotonically with the depth of the layers; that is, the neuron activation patterns become increasingly stable in the deeper fully connected layers. The entropy pattern of the fully connected layers can also guide the choice of how many fully connected layers are needed to guarantee the accuracy of the model. This work provides a new perspective on the analysis of DNNs and reveals some interesting results.
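
The abstract does not specify how the activation entropy is computed; the following is a minimal PyTorch sketch of one plausible reading, not the authors' exact procedure. It binarizes each fully connected (nn.Linear) layer's activations into firing patterns over a batch of same-class images and takes the layer entropy as the sum of per-neuron binary Shannon entropies. The small MLP, the firing threshold at zero, and the per-neuron entropy estimator are all illustrative assumptions; a randomly initialized model stands in for the paper's well-trained one so the snippet runs on its own.

import torch
import torch.nn as nn

def binary_entropy(p):
    # Shannon entropy, in bits, of a Bernoulli variable with firing rate p.
    p = p.clamp(1e-8, 1 - 1e-8)
    return -(p * p.log2() + (1 - p) * (1 - p).log2())

def layer_entropies(model, images):
    """Return one entropy value per fully connected layer."""
    acts = []
    hooks = [
        m.register_forward_hook(lambda mod, inp, out: acts.append(out.detach()))
        for m in model.modules() if isinstance(m, nn.Linear)
    ]
    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()
    # Fraction of inputs on which each neuron fires (pre-activation > 0,
    # which matches post-ReLU firing for the hidden layers), then the
    # summed per-neuron binary entropy as a layer-level statistic.
    return [binary_entropy((a > 0).float().mean(0)).sum().item() for a in acts]

# Example: a small MLP and a batch standing in for images of a single class.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
print(layer_entropies(model.eval(), torch.randn(64, 1, 28, 28)))

Under the paper's claim, a well-trained model fed same-class images should show these per-layer values decreasing with depth; a random model, as here, generally will not.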