Communications and Networking. 14th EAI International Conference, ChinaCom 2019, Shanghai, China, November 29 – December 1, 2019, Proceedings, Part II

Research Article

Layer-Wise Entropy Analysis and Visualization of Neurons Activation

@INPROCEEDINGS{10.1007/978-3-030-41117-6_3,
    author={Longwei Wang and Peijie Chen and Chengfei Wang and Rui Wang},
    title={Layer-Wise Entropy Analysis and Visualization of Neurons Activation},
    proceedings={Communications and Networking. 14th EAI International Conference, ChinaCom 2019, Shanghai, China, November 29 -- December 1, 2019, Proceedings, Part II},
    proceedings_a={CHINACOM PART 2},
    year={2020},
    month={2},
    keywords={Entropy analysis; Visualization; Neurons activation},
    doi={10.1007/978-3-030-41117-6_3}
}
    
Longwei Wang1,*, Peijie Chen1, Chengfei Wang1, Rui Wang2
  • 1: Department of Computer Science and Software Engineering
  • 2: Department of Information and Communications
*Contact email: lzw0070@auburn.edu

Abstract

Understanding the inner working mechanism of deep neural networks (DNNs) is essential for researchers seeking to design and improve them. In this work, entropy analysis is leveraged to study the activation behavior of neurons in the fully connected layers of DNNs. The entropy of each layer's activation patterns provides an efficient performance metric for evaluating the accuracy of the network model. The study is conducted on a well-trained network model: the activation patterns of the shallow and deep fully connected layers are analyzed by feeding in images of a single class. It is found that, for a well-trained deep neural network model, the entropy of the neuron activation patterns decreases monotonically with layer depth; that is, the activation patterns become increasingly stable in the deeper fully connected layers. The entropy profile of the fully connected layers can also guide the choice of how many fully connected layers are needed to guarantee model accuracy. This study provides a new perspective on the analysis of DNNs and reveals some interesting results.
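The abstract does not spell out how the layer-wise entropy is computed, but one plausible reading is to binarize each neuron's output (active vs. inactive) over a batch of same-class inputs and average the per-neuron binary entropy across the layer. The sketch below illustrates that idea on synthetic activations; the function name `layer_entropy` and the use of per-neuron (rather than joint-pattern) entropy are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def layer_entropy(activations: np.ndarray) -> float:
    """Mean per-neuron binary entropy (in bits) of a layer's activations.

    activations: array of shape (n_samples, n_neurons); a neuron is
    treated as 'active' on a sample when its output is positive
    (the natural threshold for ReLU units).
    """
    active = activations > 0.0                        # binarize activations
    p = active.mean(axis=0)                           # P(neuron is active)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)                # guard against log(0)
    h = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return float(h.mean())

# Toy illustration of the monotonic-decrease claim: a "shallow" layer whose
# neurons fire about half the time has high entropy, while a "deep" layer
# whose neurons fire almost deterministically has low entropy.
rng = np.random.default_rng(0)
shallow = rng.normal(0.0, 1.0, size=(1000, 64))   # ~50% firing rate
deep = rng.normal(1.5, 0.3, size=(1000, 64))      # nearly always firing
assert layer_entropy(deep) < layer_entropy(shallow)
```

Averaging per-neuron entropies keeps the estimate tractable; the joint entropy of the full activation pattern over 2^N states would need far more samples to estimate reliably.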

Keywords
Entropy analysis, Visualization, Neurons activation
Published
2020-02-27
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-41117-6_3