Collaborative Computing: Networking, Applications and Worksharing. 16th EAI International Conference, CollaborateCom 2020, Shanghai, China, October 16–18, 2020, Proceedings, Part I

Research Article

An Efficient Approach for Parameters Learning of Bayesian Network with Multiple Latent Variables Using Neural Networks and P-EM

@INPROCEEDINGS{10.1007/978-3-030-67537-0_22,
    author={Kaiyu Song and Kun Yue and Xinran Wu and Jia Hao},
    title={An Efficient Approach for Parameters Learning of Bayesian Network with Multiple Latent Variables Using Neural Networks and P-EM},
    proceedings={Collaborative Computing: Networking, Applications and Worksharing. 16th EAI International Conference, CollaborateCom 2020, Shanghai, China, October 16--18, 2020, Proceedings, Part I},
    proceedings_a={COLLABORATECOM},
    year={2021},
    month={1},
    keywords={Bayesian network; Latent variable; Generative adversarial network; Recurrent neural network; Parameter learning; Expectation maximization},
    doi={10.1007/978-3-030-67537-0_22}
}
Kaiyu Song, Kun Yue, Xinran Wu, Jia Hao. Year: 2021. An Efficient Approach for Parameters Learning of Bayesian Network with Multiple Latent Variables Using Neural Networks and P-EM. COLLABORATECOM. Springer. DOI: 10.1007/978-3-030-67537-0_22
Kaiyu Song1, Kun Yue1,*, Xinran Wu1, Jia Hao1
  • 1: School of Information Science and Engineering, Yunnan University
*Contact email: kyue@ynu.edu.cn

Abstract

A Bayesian network with multiple latent variables (BNML) is used to model realistic problems with unobservable features, such as disease diagnosis and preference modeling. However, EM-based parameter learning for a BNML is challenging when missing values in the training dataset give rise to a large number of intermediate results. To address this issue, we propose a clustering and P-EM based method to improve the performance of parameter learning. First, an innovative neural-network layer is defined based on the Recurrent Neural Network (RNN) by incorporating the structural information of the BNML into the Mixture of Generative Adversarial Networks (MGAN), which reduces the number of parameters by enabling clustering in an unsupervised manner. We then propose a Parabolic acceleration of the EM (P-EM) algorithm to speed up the convergence of parameter learning; geometric knowledge is used to obtain an approximation of the parameters. Experimental results show the efficiency and effectiveness of the proposed methods.
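The extrapolation idea behind parabolic EM acceleration can be illustrated on a toy problem. The sketch below is not the paper's P-EM algorithm for BNML: it runs plain EM for the unknown mean of one component in a two-component Gaussian mixture (the other mean, the variances, and the mixing weights held fixed), then extrapolates each triple of successive iterates along the quadratic curve through them, keeping the ordinary EM iterate whenever the extrapolated point does not reduce the fixed-point residual. All function names, the toy model, and the fixed extrapolation step `t` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: equal mixture of N(0, 1) and N(true_mu, 1); only mu is unknown.
true_mu = 3.0
z = rng.random(500) < 0.5
x = np.where(z, rng.normal(0.0, 1.0, 500), rng.normal(true_mu, 1.0, 500))

def em_step(mu):
    """One EM step for the unknown mean (unit variances, equal weights)."""
    # E-step: responsibility of the mu-component for each point.
    r = np.exp(-0.5 * (x - mu) ** 2)
    r = r / (r + np.exp(-0.5 * x ** 2))
    # M-step: responsibility-weighted mean.
    return np.sum(r * x) / np.sum(r)

def p_em(mu, t=3.0, iters=20):
    """EM accelerated by quadratic extrapolation through three iterates."""
    for _ in range(iters):
        m0 = mu
        m1 = em_step(m0)
        m2 = em_step(m1)
        # Parabola mu(s) through (0, m0), (1, m1), (2, m2), evaluated at s = t > 2.
        cand = m0 + t * (m1 - m0) + 0.5 * t * (t - 1) * (m2 - 2 * m1 + m0)
        # Keep the extrapolated point only if its EM residual is smaller.
        if abs(em_step(cand) - cand) < abs(em_step(m2) - m2):
            mu = cand
        else:
            mu = m2
    return mu

mu_hat = p_em(0.5)
```

The residual guard is the key design choice: extrapolating past the last plain EM iterate can overshoot, so each accelerated step is accepted only when it lands closer to a fixed point of the EM map than two ordinary steps would.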

Keywords
Bayesian network; Latent variable; Generative adversarial network; Recurrent neural network; Parameter learning; Expectation maximization
Published
2021-01-22
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-67537-0_22