Simulation Tools and Techniques. 12th EAI International Conference, SIMUtools 2020, Guiyang, China, August 28-29, 2020, Proceedings, Part I

Research Article

Dynamic Adaptive Search Strategy Based Incremental Extreme Learning Machine

Cite
    @INPROCEEDINGS{10.1007/978-3-030-72792-5_44,
        author={Zuozhi Liu and Jianjun Jiao and Quan Yuan},
        title={Dynamic Adaptive Search Strategy Based Incremental Extreme Learning Machine},
        proceedings={Simulation Tools and Techniques. 12th EAI International Conference, SIMUtools 2020, Guiyang, China, August 28-29, 2020, Proceedings, Part I},
        proceedings_a={SIMUTOOLS},
        year={2021},
        month={4},
        keywords={Single-hidden layer feedforward network, Incremental extreme learning machine, Enhanced grey wolf optimization, Universal approximation},
        doi={10.1007/978-3-030-72792-5_44}
    }
    
Zuozhi Liu1, Jianjun Jiao1, Quan Yuan2
  • 1: School of Mathematics and Statistics, Guizhou University of Finance and Economics
  • 2: Finance Department, Guizhou University of Finance and Economics

Abstract

Extreme learning machine (ELM) is a promising method for training single-hidden layer feedforward networks (SLFNs), attractive for its simplicity and high efficiency. However, despite the rapid development of ELM algorithms, determining a suitable network architecture remains a challenge. To address this issue, this work develops a modified ELM algorithm based on a novel adaptive optimization method. Specifically, we use a growth strategy to build the network architecture incrementally. During learning, the grey wolf optimization (GWO) technique is introduced to seek optimal parameters for each new hidden node instead of selecting them at random. In addition, to improve convergence speed, we further ameliorate the traditional GWO approach. Experimental results on several benchmark applications indicate that our AI-ELM algorithm dramatically reduces the network scale and achieves better generalization performance than other classical ELM algorithms.
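The abstract describes the method only at a high level. As a concrete illustration, below is a minimal Python/NumPy sketch of the general idea: an I-ELM-style growth loop in which each new hidden node's input weights and bias are chosen by a plain grey wolf optimization (GWO) search over an error-reduction fitness. Everything here (the fitness, the names gwo_search and incremental_elm, all hyperparameters) is an assumption reconstructed from the abstract, not the authors' implementation, and the paper's enhanced GWO variant is not reproduced.

    import numpy as np

    def fitness(params, X, residual):
        # Error reduction |<e,h>| / ||h|| achieved by a candidate hidden node.
        w, b = params[:-1], params[-1]
        h = np.tanh(X @ w + b)              # candidate node activations on the data
        denom = h @ h
        return abs(residual @ h) / np.sqrt(denom) if denom > 1e-12 else 0.0

    def gwo_search(X, residual, dim, n_wolves=10, n_iters=20):
        # Plain GWO (Mirjalili et al., 2014); the paper uses an enhanced variant.
        rng = np.random.default_rng()
        wolves = rng.uniform(-1.0, 1.0, size=(n_wolves, dim))
        for t in range(n_iters):
            scores = np.array([fitness(w, X, residual) for w in wolves])
            leaders = wolves[np.argsort(scores)[::-1][:3]]  # alpha, beta, delta
            a = 2.0 * (1.0 - t / n_iters)                   # decreases linearly 2 -> 0
            for i in range(n_wolves):
                cand = np.zeros(dim)
                for lead in leaders:
                    A = 2.0 * a * rng.random(dim) - a
                    C = 2.0 * rng.random(dim)
                    cand += lead - A * np.abs(C * lead - wolves[i])
                wolves[i] = cand / 3.0                      # average of the three pulls
        scores = np.array([fitness(w, X, residual) for w in wolves])
        return wolves[np.argmax(scores)]

    def incremental_elm(X, y, max_nodes=50, tol=1e-3):
        # Grow hidden nodes one at a time. Each node's output weight has the
        # I-ELM closed form beta = <e,h>/<h,h>, so the residual never grows.
        residual, nodes = y.astype(float), []
        for _ in range(max_nodes):
            params = gwo_search(X, residual, dim=X.shape[1] + 1)
            w, b = params[:-1], params[-1]
            h = np.tanh(X @ w + b)
            beta = (residual @ h) / (h @ h)
            nodes.append((w, b, beta))
            residual = residual - beta * h
            if np.linalg.norm(residual) < tol:              # early stop: small network
                break
        return nodes

    def predict(nodes, X):
        return sum(beta * np.tanh(X @ w + b) for w, b, beta in nodes)

    # Toy regression check
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]
    nodes = incremental_elm(X, y, max_nodes=30)
    print(len(nodes), np.mean((predict(nodes, X) - y) ** 2))

The closed-form output weight beta = <e,h>/<h,h> is what keeps incremental ELM cheap: adding a node only requires the current residual, and it guarantees the training error never increases, which underlies I-ELM's universal approximation property.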

Keywords
Single-hidden layer feedforward network, Incremental extreme learning machine, Enhanced grey wolf optimization, Universal approximation
Published
2021-04-27
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-72792-5_44