Computer Science and Education in Computer Science. 19th EAI International Conference, CSECS 2023, Boston, MA, USA, June 28–29, 2023, Proceedings

Research Article

Optimized FPGA Implementation of an Artificial Neural Network Using a Single Neuron

Cite
@INPROCEEDINGS{10.1007/978-3-031-44668-9_19,
    author={Yassen Gorbounov and Hao Chen},
    title={Optimized FPGA Implementation of an Artificial Neural Network Using a Single Neuron},
    proceedings={Computer Science and Education in Computer Science. 19th EAI International Conference, CSECS 2023, Boston, MA, USA, June 28--29, 2023, Proceedings},
    proceedings_a={CSECS},
    year={2023},
    month={10},
    keywords={Artificial Neural Network, Contextual Switching, Hardware Acceleration, FPGA, Optimization},
    doi={10.1007/978-3-031-44668-9_19}
}
Yassen Gorbounov*, Hao Chen1
1: China University of Mining and Technology
*Contact email: ygorbounov@nbu.bg

Abstract

Since their emergence in the early 1940s as a connectionist approximation of how neurons in the brain function, artificial neural networks have undergone significant development. Their complexity has grown steadily, almost exponentially, together with an ever-increasing variety of models. This is due, on the one hand, to advances in microelectronics and, on the other, to the growing interest in and development of the mathematical apparatus of artificial intelligence. It can be argued, however, that overcomplicating the structure of an artificial neural network is no guarantee of success. Following this reasoning, the paper continues the authors' previous research on creating an optimized neural network designed for use on resource-constrained hardware. The new solution presents a design procedure for building neural networks using only a single hardware neuron, by means of context switching and time multiplexing with the aid of an FPGA device. This would lead to a significant reduction in computational requirements and would make it possible to create small but very efficient artificial neural networks.
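
To make the central idea concrete, the following is a minimal software sketch of the single-neuron, time-multiplexed evaluation scheme described above. It is not taken from the paper: the network shape, the weight values, and the tanh activation are illustrative assumptions. One neuron routine performs all multiply-accumulate work, and the full network is evaluated by repeatedly loading that routine with each neuron's weight "context", which is the software analogue of context switching a single hardware neuron on the FPGA.

    # Minimal sketch (assumptions: feed-forward topology, tanh activation).
    import math

    def neuron(inputs, weights, bias):
        """The one 'hardware' neuron: multiply-accumulate followed by activation."""
        acc = bias
        for x, w in zip(inputs, weights):
            acc += x * w
        return math.tanh(acc)  # activation choice is an assumption, not from the paper

    def run_network(inputs, layers):
        """Evaluate a whole network by reusing the single neuron.

        `layers` is a list of layers; each layer is a list of (weights, bias)
        tuples, i.e. the per-neuron 'context' that would be switched in from
        memory before each reuse of the physical neuron.
        """
        signal = inputs
        for layer in layers:
            outputs = []
            for weights, bias in layer:   # time multiplexing: one neuron, many contexts
                outputs.append(neuron(signal, weights, bias))
            signal = outputs
        return signal

    # Tiny 2-3-1 example with made-up weights.
    example_layers = [
        [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.4), ([-0.6, 0.1], 0.0)],
        [([0.7, -0.5, 0.2], 0.05)],
    ]
    print(run_network([1.0, 0.5], example_layers))

In hardware, the inner loop corresponds to sequencing one physical multiply-accumulate/activation unit over stored weight contexts, trading throughput for a much smaller logic footprint.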

Keywords
Artificial Neural Network; Contextual Switching; Hardware Acceleration; FPGA; Optimization
Published
2023-10-11
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-44668-9_19