EL 22(3): e4

Research Article

Model Protection Scheme Against Distillation Attack in Internet of Vehicles

Cite
@ARTICLE{10.4108/eetel.v8i3.3318,
    author={Weiping Peng and Jiabao Liu and Yuan Ping and Di Ma},
    title={Model Protection Scheme Against Distillation Attack in Internet of Vehicles},
    journal={EAI Endorsed Transactions on e-Learning},
    volume={8},
    number={3},
    publisher={EAI},
    journal_a={EL},
    year={2023},
    month={6},
    keywords={Internet of vehicles, Privacy protection, Distillation immunity, Model reinforcement, Differential privacy},
    doi={10.4108/eetel.v8i3.3318}
}
Weiping Peng, Jiabao Liu, Yuan Ping and Di Ma. Model Protection Scheme Against Distillation Attack in Internet of Vehicles. EAI Endorsed Transactions on e-Learning (EL), EAI, 2023. DOI: 10.4108/eetel.v8i3.3318
Weiping Peng1, Jiabao Liu1,*, Yuan Ping2, Di Ma1
  • 1: Henan Polytechnic University
  • 2: Xuchang University
*Contact email: jiabaoliu@home.hpu.edu.cn

Abstract

In the Internet of Vehicles, deep learning models can be stolen by malicious roadside units, base stations, and other attackers through knowledge distillation and related techniques, threatening both model security and the privacy of user data. To address this, this paper proposes a model protection scheme that strengthens models against distillation. The scheme applies model reinforcement ideas such as model self-learning and an attention mechanism to maximize the difference between the protected pre-trained model and a normally trained model without sacrificing performance, and it further combines local differential privacy to reduce the effectiveness of model inversion attacks. Experimental results on several datasets show that the method is effective against both standard and data-free knowledge distillation, and that it provides better model protection than passive defenses.
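The abstract does not give implementation details, so the following is only a rough, generic sketch of one ingredient it mentions: using a local differential privacy mechanism on a model's responses so that an attacker querying the model (for distillation or inversion) only sees noisy outputs. The function name, the Laplace mechanism, and the epsilon and sensitivity values are illustrative assumptions, not the authors' actual design or parameters.

```python
import numpy as np

def ldp_perturbed_logits(logits, epsilon=1.0, sensitivity=1.0, rng=None):
    """Return logits perturbed with Laplace noise (a basic local DP mechanism).

    epsilon: privacy budget; smaller values add more noise and give stronger privacy.
    sensitivity: assumed L1 sensitivity of the logits (taken as 1.0 for illustration).
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=np.shape(logits))
    return np.asarray(logits) + noise

# Example: a would-be student model only ever observes noisy soft labels.
clean_logits = np.array([2.3, 0.4, -1.1, 0.2])            # hypothetical teacher output
noisy_logits = ldp_perturbed_logits(clean_logits, epsilon=0.5)
soft_labels = np.exp(noisy_logits) / np.exp(noisy_logits).sum()  # softmax over noisy logits
print(soft_labels)
```

Distilling from such perturbed soft labels degrades the student's ability to copy the teacher, at the cost of some utility controlled by epsilon; the paper's actual scheme additionally reshapes the teacher itself via model reinforcement, which is not reflected in this sketch.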

Keywords
Internet of vehicles, Privacy protection, Distillation immunity, Model reinforcement, Differential privacy
Received
2023-05-07
Accepted
2023-05-21
Published
2023-06-27
Publisher
EAI
http://dx.doi.org/10.4108/eetel.v8i3.3318

Copyright © 2022 Weiping Peng et al., licensed to EAI. This is an open access article distributed under the terms of the CC BY-NC-SA 4.0 license, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium so long as the original work is properly cited.

Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico