Security and Privacy in New Computing Environments. 6th International Conference, SPNCE 2023, Guangzhou, China, November 25–26, 2023, Proceedings

Research Article

EncoderMU: Machine Unlearning in Contrastive Learning

Cite
@INPROCEEDINGS{10.1007/978-3-031-73699-5_15,
    author={Zixin Wang and Bing Mi and Kongyang Chen},
    title={EncoderMU: Machine Unlearning in Contrastive Learning},
    proceedings={Security and Privacy in New Computing Environments. 6th International Conference, SPNCE 2023, Guangzhou, China, November 25--26, 2023, Proceedings},
    proceedings_a={SPNCE},
    year={2025},
    month={1},
    keywords={Machine Unlearning, Contrastive Learning, Distributed Learning},
    doi={10.1007/978-3-031-73699-5_15}
}
Zixin Wang¹, Bing Mi, Kongyang Chen¹,*
  • 1: Institute of Artificial Intelligence and Blockchain
*Contact email: kychen@gzhu.edu.cn

Abstract

Machine unlearning is a challenging process that requires a model to remove the influence of specified training data while minimizing the accompanying loss of accuracy. Although machine unlearning has been studied extensively in recent years, most of this work has focused on supervised learning models, leaving unlearning for contrastive learning models relatively underexplored. Convinced that self-supervised learning holds promise rivaling or surpassing that of supervised learning, we investigate machine unlearning methods centered on contrastive learning models. In this study, we introduce a novel gradient constraint-based approach that trains the model to achieve unlearning effectively. Our method requires only a small number of training epochs and the identification of the data to be unlearned. Notably, it performs well not only on contrastive learning models but also on supervised learning models, demonstrating its versatility across learning paradigms.
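
The abstract does not spell out the gradient constraint, so the sketch below is only one plausible reading, not the authors' actual method: gradient ascent on the forget set's contrastive (InfoNCE) loss, with each step projected so it does not conflict with the gradient on retained data (in the spirit of PCGrad-style gradient surgery). All names here (info_nce, forget_loader, retain_loader, the per-parameter projection) are illustrative assumptions.

    # Hypothetical sketch of gradient-constrained unlearning for a contrastive
    # encoder. Assumes each loader yields two augmented views per batch.
    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.5):
        """SimCLR-style InfoNCE loss between two batches of augmented views."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature              # pairwise similarities
        labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
        return F.cross_entropy(logits, labels)

    def unlearn_epoch(encoder, forget_loader, retain_loader, optimizer):
        """One unlearning epoch: ascend on the forget-set loss, with each step
        projected so it does not also raise the loss on retained data."""
        params = [p for p in encoder.parameters() if p.requires_grad]
        for (xf1, xf2), (xr1, xr2) in zip(forget_loader, retain_loader):
            # Gradient of the contrastive loss on the data to be unlearned.
            g_forget = torch.autograd.grad(
                info_nce(encoder(xf1), encoder(xf2)), params)
            # Gradient of the contrastive loss on the data to be kept.
            g_retain = torch.autograd.grad(
                info_nce(encoder(xr1), encoder(xr2)), params)
            optimizer.zero_grad()
            for p, gf, gr in zip(params, g_forget, g_retain):
                ascent = gf                    # uphill direction on the forget loss
                dot = (ascent * gr).sum()
                if dot > 0:                    # step would also raise the retain loss:
                    ascent = ascent - dot / (gr.norm() ** 2 + 1e-12) * gr  # project out
                p.grad = -ascent               # optimizer does p -= lr * grad => ascent
            optimizer.step()

A few such epochs over the forget set would match the abstract's claim of needing only minimal training; the per-parameter projection is a simplification of a single projection over the full flattened gradient.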

Keywords
Machine Unlearning, Contrastive Learning, Distributed Learning
Published
2025-01-01
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-73699-5_15