Mobile Multimedia Communications. 15th EAI International Conference, MobiMedia 2022, Virtual Event, July 22-24, 2022, Proceedings

Research Article

Dynamic Style Transferring and Content Preserving for Domain Generalization

Cite (BibTeX)
@INPROCEEDINGS{10.1007/978-3-031-23902-1_23,
  author={Chaoyi Wang and Liang Li and Yuhan Gao and Jiehua Zhang and Yefei Zhang and Yaoqi Sun and Weijun Qin and Jun Yin and Zhongyuan Wang},
  title={Dynamic Style Transferring and Content Preserving for Domain Generalization},
  proceedings={Mobile Multimedia Communications. 15th EAI International Conference, MobiMedia 2022, Virtual Event, July 22-24, 2022, Proceedings},
  proceedings_a={MOBIMEDIA},
  year={2023},
  month={2},
  keywords={Transfer learning; Domain generalization; Style transfer; Content preserving},
  doi={10.1007/978-3-031-23902-1_23}
}
Chaoyi Wang1, Liang Li2,*, Yuhan Gao, Jiehua Zhang1, Yefei Zhang1, Yaoqi Sun1, Weijun Qin, Jun Yin3, Zhongyuan Wang
  • 1: Hangzhou Dianzi University
  • 2: Institute of Computing Technology
  • 3: Zhejiang Dahua Technology CO., LTD.
*Contact email: liang.li@ict.ac.cn

Abstract

Although convolutional neural networks (CNNs) have shown remarkable ability across computer vision tasks, they cope poorly with domain shift. Recent studies show that domain shift mainly results from variation in the style or texture of images rather than in their content. Inspired by this, we propose dynamic style transferring to overcome the style bias of CNNs. Specifically, we design a knowledge-injected attention mechanism that learns adaptive fusion weights and embeds the style knowledge of dynamically chosen images in the latent space. This controls the extent of the transferred style while retaining content-related information. Furthermore, we introduce a content-preserving module that forms an adversarial structure with the encoder to make the extracted style information more precise. To balance the adversarial relationship between the encoder and the auxiliary predictor, we also introduce a consistency loss that empowers the style-biased predictor and indirectly boosts the encoder's ability by extending the back-propagation process. We conduct extensive experiments on the PACS and Office-Home datasets to evaluate the effectiveness of our method. Experimental results show remarkable performance over state-of-the-art methods in domain generalization.
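The abstract's core mechanism, blending the channel-wise style statistics of a content feature with those of a dynamically chosen style feature under a learned fusion weight, can be sketched concretely. The following PyTorch snippet is an illustrative approximation, not the authors' released code: the small fusion head stands in for the knowledge-injected attention, and all names (DynamicStyleTransfer, fusion) are hypothetical.

import torch
import torch.nn as nn

class DynamicStyleTransfer(nn.Module):
    # Sketch of latent-space style transfer with a learned fusion weight.
    # The weight (a stand-in for the paper's knowledge-injected attention)
    # controls how much of the style image's statistics are transferred.
    def __init__(self, channels: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        # Hypothetical attention head: maps the concatenated channel-wise
        # statistics of both images to per-channel fusion weights in (0, 1).
        self.fusion = nn.Sequential(nn.Linear(4 * channels, channels), nn.Sigmoid())

    def _stats(self, x):
        # Per-sample, per-channel mean and std over spatial dimensions.
        mu = x.mean(dim=(2, 3))
        sigma = x.var(dim=(2, 3), unbiased=False).add(self.eps).sqrt()
        return mu, sigma

    def forward(self, content, style):
        # content, style: feature maps of shape (B, C, H, W)
        mu_c, sig_c = self._stats(content)
        mu_s, sig_s = self._stats(style)
        w = self.fusion(torch.cat([mu_c, sig_c, mu_s, sig_s], dim=1))
        # Blend statistics; w -> 1 transfers the full style, w -> 0 keeps the original.
        mu_mix = w * mu_s + (1.0 - w) * mu_c
        sig_mix = w * sig_s + (1.0 - w) * sig_c
        # AdaIN-style re-normalization: content structure kept, style statistics swapped.
        normalized = (content - mu_c[:, :, None, None]) / sig_c[:, :, None, None]
        return normalized * sig_mix[:, :, None, None] + mu_mix[:, :, None, None]

if __name__ == "__main__":
    feat = torch.randn(8, 64, 32, 32)
    dst = DynamicStyleTransfer(64)
    # "Dynamically chosen" style images approximated by a shuffled batch.
    mixed = dst(feat, feat[torch.randperm(feat.size(0))])
    print(mixed.shape)  # torch.Size([8, 64, 32, 32])

In this sketch the sigmoid keeps each fusion weight bounded, which mirrors the abstract's claim that the extent of the transferred style is controlled while content-related information is retained; the adversarial content-preserving module and consistency loss described above would be added as separate training objectives.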

Keywords
Transfer learning; Domain generalization; Style transfer; Content preserving
Published
2023-02-01
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-23902-1_23