Digital Forensics and Cyber Crime. 13th EAI International Conference, ICDF2C 2022, Boston, MA, November 16-18, 2022, Proceedings

Research Article

Can Image Watermarking Efficiently Protect Deep-Learning-Based Image Classifiers? – A Preliminary Security Analysis of an IP-Protecting Method

BibTeX
@INPROCEEDINGS{10.1007/978-3-031-36574-4_3,
    author={Jia-Hui Xie and Di Wu and Bo-Hao Zhang and Hai Su and Huan Yang},
    title={Can Image Watermarking Efficiently Protect Deep-Learning-Based Image Classifiers? -- A Preliminary Security Analysis of an IP-Protecting Method},
    proceedings={Digital Forensics and Cyber Crime. 13th EAI International Conference, ICDF2C 2022, Boston, MA, November 16-18, 2022, Proceedings},
    proceedings_a={ICDF2C},
    publisher={Springer},
    year={2023},
    month={7},
    keywords={Blind watermarking, Intellectual property protection, Image steganography, Watermark extraction, Steganalysis, Evasion attacks, Spoofing attacks, Robustness attacks},
    doi={10.1007/978-3-031-36574-4_3}
}
Jia-Hui Xie1, Di Wu1, Bo-Hao Zhang1, Hai Su1, Huan Yang1,*
  • 1: School of Software, South China Normal University, Foshan
*Contact email: huan.yang@m.scnu.edu.cn

Abstract

Being widely adopted by an increasingly rich array of classification tasks across industries, image classifiers based on deep neural networks (DNNs) have helped boost business efficiency and reduce costs. To protect the intellectual property (IP) of DNN classifiers, a blind-watermarking-based technique that opens "backdoors" through image steganography has been proposed. However, it remains to be explored whether this approach can effectively protect DNN models in practical settings where malicious attacks may be launched against it. In this paper, we study the feasibility and effectiveness of this previously proposed blind-watermarking-based DNN classifier protection technique from the security perspective (our code is available at https://github.com/ByGary/Security-of-IP-Protection-Frameworks). We first show that the IP protection offered by the original algorithm, when trained with 256 × 256 images, can easily be evaded due to an obvious visibility issue. By replacing the original approach's steganalyzer with a watermark extraction algorithm and revising the overall training strategy, we are able to mitigate this visibility issue. Furthermore, we evaluate our improved approaches under three simple yet practical attacks: evasion attacks, spoofing attacks, and robustness attacks. Our evaluation results reveal that further security enhancements are indispensable for practical applications of the examined blind-watermarking-based DNN image classifier protection scheme, and we provide a set of guidelines and precautions to facilitate improved protection of the intellectual property of DNN classifiers.
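The protection scheme the abstract examines rests on blind watermarking: a hidden pattern is embedded into query images via steganography and later recovered to verify ownership, and an attacker who can see or strip the watermark defeats the protection. The sketch below illustrates only the generic embed/extract round trip using least-significant-bit (LSB) steganography on a grayscale image; it is not the paper's actual steganographic or extraction network, and every function name here is our own illustrative choice.

```python
import numpy as np

def embed_lsb_watermark(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bits of a cover image."""
    flat = cover.flatten()  # flatten() returns a copy, so cover is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb_watermark(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the embedded bits from the stego image (blind: no cover needed)."""
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
watermark = rng.integers(0, 2, size=64, dtype=np.uint8)

stego = embed_lsb_watermark(cover, watermark)
recovered = extract_lsb_watermark(stego, watermark.size)

# Round trip succeeds, and each pixel changes by at most 1 gray level,
# which is why naive LSB marks are visually imperceptible yet fragile:
# any robustness attack (e.g., re-quantization) can erase them.
assert np.array_equal(recovered, watermark)
```

Fragility under the LSB assumption is exactly why the visibility and robustness attacks discussed in the paper matter: an IP-protecting watermark must survive image perturbations without becoming visible.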

Keywords
Blind watermarking, Intellectual property protection, Image steganography, Watermark extraction, Steganalysis, Evasion attacks, Spoofing attacks, Robustness attacks
Published
2023-07-16
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-36574-4_3
Copyright © 2022–2025 ICST