Artificial Intelligence for Communications and Networks. Third EAI International Conference, AICON 2021, Xining, China, October 23–24, 2021, Proceedings, Part I

Research Article

Transformer-Based Few-Shot Learning for Image Classification

Cite (BibTeX)
@INPROCEEDINGS{10.1007/978-3-030-90196-7_6,
    author={Tao Gan and Weichao Li and Yuanzhe Lu and Yanmin He},
    title={Transformer-Based Few-Shot Learning for Image Classification},
    proceedings={Artificial Intelligence for Communications and Networks. Third EAI International Conference, AICON 2021, Xining, China, October 23--24, 2021, Proceedings, Part I},
    proceedings_a={AICON},
    year={2021},
    month={11},
    keywords={Few-shot learning; Classification; Transformer; Regularization},
    doi={10.1007/978-3-030-90196-7_6}
}
Tao Gan1,*, Weichao Li1, Yuanzhe Lu1, Yanmin He1
  • 1: School of Information and Software Engineering, University of Electronic Science and Technology of China
*Contact email: gantao@uestc.edu.cn

Abstract

Few-shot learning (FSL) remains a challenging research problem. Traditional few-shot learning methods mainly consider the distance relationship between the query set and the support set, while the context information among the support samples is not fully exploited. This paper proposes a Transformer-based few-shot learning method (TML). By taking advantage of the self-attention mechanism of the Transformer, TML effectively exploits the correlations within the support set so as to learn highly discriminative global features. Furthermore, to cope with the overfitting introduced by the increase in model complexity, we add a classification loss to the total loss function as a regularization term. To overcome the limitations of the traditional cross-entropy loss, a label refinement method is used to refine the label assignment for classification. Experimental results show that TML improves the ability to learn hard samples and achieves higher classification accuracy than existing state-of-the-art few-shot learning methods.
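The core idea in the abstract — enriching support-set prototypes with self-attention before nearest-prototype classification of the query — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual architecture: the exact TML model, its classification-loss regularizer, and the label refinement step are not specified in the abstract, and the weight matrices `Wq`, `Wk`, `Wv` and the prototype-distance classifier are hypothetical placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attend(support, Wq, Wk, Wv):
    # support: (N, D) — one embedding (prototype) per class.
    # Scaled dot-product self-attention mixes context across the
    # support classes, as the abstract suggests the Transformer does.
    q, k, v = support @ Wq, support @ Wk, support @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return support + attn @ v  # residual: context-enriched prototypes

def classify(query, prototypes):
    # Negative squared Euclidean distance as class logits
    # (a prototypical-network-style metric classifier, assumed here).
    d = ((prototypes - query) ** 2).sum(axis=-1)
    return softmax(-d)

rng = np.random.default_rng(0)
D = 8
support = rng.normal(size=(5, D))          # a 5-way episode, toy embeddings
Wq, Wk, Wv = (0.1 * rng.normal(size=(D, D)) for _ in range(3))
prototypes = self_attend(support, Wq, Wk, Wv)
probs = classify(support[2], prototypes)   # query identical to class 2's sample
print(probs.argmax())
```

Because the attention update is a small residual perturbation, a query equal to a support sample stays closest to that class's enriched prototype; in a trained model the attention weights would be learned end-to-end together with the embedding network.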

Keywords
Few-shot learning; Classification; Transformer; Regularization
Published
2021-11-03
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-90196-7_6
Copyright © 2021–2025 ICST
Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico