Communications and Networking. 15th EAI International Conference, ChinaCom 2020, Shanghai, China, November 20-21, 2020, Proceedings

Research Article

Efficient Architecture for Convolution and Softmax Function in Deep Learning Accelerator

BibTeX
    @INPROCEEDINGS{10.1007/978-3-030-67720-6_43,
        author={Zhenyu Jiang and Zhifeng Zhang and Haoqi Ren and Jun Wu},
        title={Efficient Architecture for Convolution and Softmax Function in Deep Learning Accelerator},
        proceedings={Communications and Networking. 15th EAI International Conference, ChinaCom 2020, Shanghai, China, November 20-21, 2020, Proceedings},
        proceedings_a={CHINACOM},
        year={2021},
        month={2},
        keywords={Convolutional neural network; Hardware architecture; Convolution; Winograd algorithm; Softmax function},
        doi={10.1007/978-3-030-67720-6_43}
    }
Zhenyu Jiang1,*, Zhifeng Zhang1, Haoqi Ren1, Jun Wu2
  • 1: College of Electronic and Information Engineering
  • 2: School of Computer Science
*Contact email: 1832925@tongji.edu.cn

Abstract

Convolutional neural networks (CNNs) are widely used in deep learning. However, the hardware consumption of a convolutional neural network is very large. Traditional Central Processing Units (CPUs) and Graphics Processing Units (GPUs) are inefficient and expensive for neural networks, so an efficient hardware design is required. The proposed design, based on a Digital Signal Processor (DSP), offers fast operation and strong computation ability for both training and inference of CNNs. In this paper, the hardware architecture of the convolution and softmax function is specially optimized. The Winograd algorithm reduces the number of multiplications in convolution and thus decreases hardware complexity, since multiplication is much more expensive to implement in hardware than addition. The softmax function is also simplified by replacing the divider with a subtractor and a logarithmic function, which cost fewer resources. The proposed hardware architecture dramatically decreases complexity and hardware resource usage.
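The full paper sits behind the SpringerLink DOI below, so purely as a rough illustration of the two ideas the abstract names, here is a minimal Python/NumPy sketch. It is not the authors' DSP architecture, and the function names are invented here: it shows the standard Winograd F(2,3) identities, which produce two 1D convolution outputs with four multiplications instead of six, and a division-free softmax that turns the usual divide into a subtraction in the log domain.

    import numpy as np

    def winograd_f23(d, g):
        # Winograd F(2,3): two outputs of a 1D convolution of a 4-element
        # input tile d with a 3-tap filter g, using 4 multiplications
        # instead of the 6 a direct computation needs.
        m1 = (d[0] - d[2]) * g[0]
        m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
        m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
        m4 = (d[1] - d[3]) * g[2]
        return np.array([m1 + m2 + m3, m2 - m3 - m4])

    def softmax_div_free(x):
        # Softmax without a divider: exp(x_i - ln(sum_j exp(x_j))).
        # The division in the usual definition becomes a subtraction in
        # the log domain; subtracting max(x) first keeps exp() in range.
        x = x - np.max(x)
        return np.exp(x - np.log(np.sum(np.exp(x))))

    # Sanity checks against the direct definitions.
    d = np.array([1.0, 2.0, 3.0, 4.0])
    g = np.array([0.5, -1.0, 2.0])
    direct = np.array([d[0:3] @ g, d[1:4] @ g])   # sliding dot products
    assert np.allclose(winograd_f23(d, g), direct)

    x = np.array([1.0, 2.0, 3.0])
    assert np.allclose(softmax_div_free(x), np.exp(x) / np.sum(np.exp(x)))

In an actual hardware datapath, the exp and ln steps would typically be realized with lookup tables or piecewise-linear approximations rather than library calls, and the filter-side Winograd terms (g[0]+g[1]+g[2])/2 and (g[0]-g[1]+g[2])/2 would be precomputed once per filter, since the weights are fixed at inference time.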

Keywords
Convolutional neural network; Hardware architecture; Convolution; Winograd algorithm; Softmax function
Published
2021-02-02
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-67720-6_43