Machine Learning and Intelligent Communications. 4th International Conference, MLICOM 2019, Nanjing, China, August 24–25, 2019, Proceedings

Research Article

Batch Gradient Training Method with Smoothing Regularization for Echo State Networks

  • @INPROCEEDINGS{10.1007/978-3-030-32388-2_42,
        author={Zohaib Ahmad and Kaizhe Nie and Junfei Qiao and Cuili Yang},
        title={Batch Gradient Training Method with Smoothing Regularization for Echo State Networks},
        proceedings={Machine Learning and Intelligent Communications. 4th International Conference, MLICOM 2019, Nanjing, China, August 24--25, 2019, Proceedings},
        proceedings_a={MLICOM},
        year={2019},
        month={10},
        keywords={Echo state networks, Gradient method, Smoothing regularization, Sparsity},
        doi={10.1007/978-3-030-32388-2_42}
    }
    
Zohaib Ahmad1,*, Kaizhe Nie1,*, Junfei Qiao1, Cuili Yang1
  • 1: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology
*Contact email: ahmedzohaib03@gmail.com, 1036685809@qq.com

Abstract

Echo state networks (ESNs) have been widely used for time series prediction due to their excellent learning performance and fast convergence speed. However, the ESN output weights obtained by the pseudoinverse are often ill-posed. To solve this problem, an ESN trained with a batch gradient method and smoothing regularization (ESN-BGSL0) is studied. By introducing a smoothing regularizer into the traditional error function, redundant output weights of the ESN-BGSL0 are driven to zero and pruned. Two examples illustrate the efficiency of the proposed algorithm in terms of estimation accuracy and network compactness.
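
For intuition, below is a minimal sketch (not the authors' implementation) of the general idea: reservoir states are collected, and the output weights are trained by batch gradient descent on the squared error plus a smooth sparsity-inducing penalty, after which near-zero weights are pruned. The surrogate 1 - exp(-w^2 / (2*sigma^2)) is one common smoothed approximation of the L0 norm, suggested here only by the "SL0" in the method's name; the reservoir size, spectral radius, sigma, learning rate, penalty weight, and pruning threshold are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Random reservoir: input weights and a recurrent matrix rescaled to
    # spectral radius 0.9 (a typical choice, not taken from the paper).
    n_in, n_res = 1, 100
    W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
    W = rng.uniform(-1.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

    def run_reservoir(u):
        # Collect reservoir states for an input sequence u of shape (T, n_in).
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W_in @ u_t + W @ x)
            states.append(x.copy())
        return np.array(states)  # shape (T, n_res)

    def train_output_weights(X, y, sigma=0.1, lam=1e-3, lr=1e-2, epochs=2000):
        # Batch gradient descent on
        #   0.5/N * ||X w - y||^2 + lam * sum_i (1 - exp(-w_i^2 / (2 sigma^2))),
        # where the second term is a smoothed-L0 surrogate that pushes
        # redundant weights toward zero.
        w = np.zeros(X.shape[1])
        n = len(y)
        for _ in range(epochs):
            err = X @ w - y
            grad_l0 = (w / sigma**2) * np.exp(-(w**2) / (2.0 * sigma**2))
            w -= lr * (X.T @ err / n + lam * grad_l0)
        w[np.abs(w) < 1e-4] = 0.0  # prune weights driven (near) to zero
        return w

    # Toy usage: one-step-ahead prediction of a sine wave.
    u = np.sin(0.2 * np.arange(400)).reshape(-1, 1)
    X, y = run_reservoir(u[:-1]), u[1:, 0]
    w_out = train_output_weights(X, y)
    print("nonzero output weights:", np.count_nonzero(w_out), "/", n_res)
    print("train MSE:", np.mean((X @ w_out - y) ** 2))

The printed count of nonzero weights illustrates the network-compactness claim: the smooth penalty, unlike plain ridge regression, zeroes out entire output connections rather than merely shrinking them.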