Mobile and Ubiquitous Systems: Computing, Networking and Services. 19th EAI International Conference, MobiQuitous 2022, Pittsburgh, PA, USA, November 14-17, 2022, Proceedings

Research Article

Towards Cross Domain CSI Action Recognition Through One-Shot Bimodal Domain Adaptation

Cite
    BibTeX:
    @INPROCEEDINGS{10.1007/978-3-031-34776-4_16,
        author={Bao Zhou and Rui Zhou and Yue Luo and Yu Cheng},
        title={Towards Cross Domain CSI Action Recognition Through One-Shot Bimodal Domain Adaptation},
        proceedings={Mobile and Ubiquitous Systems: Computing, Networking and Services. 19th EAI International Conference, MobiQuitous 2022, Pittsburgh, PA, USA, November 14-17, 2022, Proceedings},
        proceedings_a={MOBIQUITOUS},
        year={2023},
        month={6},
        keywords={Action recognition, Modal fusion, Data synthesis, Domain adaptation},
        doi={10.1007/978-3-031-34776-4_16}
    }
    Plain text:
    Bao Zhou, Rui Zhou, Yue Luo, Yu Cheng. Towards Cross Domain CSI Action Recognition Through One-Shot Bimodal Domain Adaptation. MOBIQUITOUS, Springer, 2023. DOI: 10.1007/978-3-031-34776-4_16
Bao Zhou, Rui Zhou*, Yue Luo, Yu Cheng
    *Contact email: ruizhou@uestc.edu.cn

    Abstract

    Human action recognition based on WiFi Channel State Information (CSI) has attracted enormous attention in recent years. Although it performs well under supervised learning, the recognition model suffers significant performance degradation when applied in a new domain (e.g., a new environment, a different location, or a new user). To make the recognition model robust to domain changes, researchers have proposed various methods, including semi-supervised domain adaptation, unsupervised domain adaptation, and domain generalization. Semi-supervised and unsupervised solutions still require a large number of partially labeled or unlabeled samples from the new domain, while domain generalization solutions have difficulty achieving acceptable accuracy. To mitigate these problems, we propose a one-shot bimodal domain adaptation method that achieves cross-domain action recognition with much reduced effort. The method has two key points. First, it synthesizes virtual samples to augment the training dataset of the target domain, requiring only one sample per action in the target domain. Second, it regards the amplitude and the phase as two consistent modalities and fuses them to enhance recognition accuracy. Virtual data synthesis is achieved by linear transformation with dynamic domain weights and a synthesis autoencoder. Bimodal fusion is achieved by a fusion autoencoder and feature concatenation under a consistency criterion. Evaluations on daily activities achieved average accuracies of 85.03% and 90.53% at target locations, and 87.90% and 82.40% in target rooms. Evaluations on hand gestures achieved average accuracies of 91.67% and 85.53% on target users, and 83.04% and 88.01% in target rooms.
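    To make the one-shot synthesis idea concrete, the Python/NumPy fragment below is a minimal sketch of generating virtual target-domain samples by linearly blending labeled source-domain samples with the single available target-domain sample under dynamically drawn domain weights. The function name, array shapes, and the uniform weight distribution are illustrative assumptions; the paper's synthesis autoencoder stage is not reproduced here.

    # Sketch: virtual-sample synthesis by linear transformation with
    # dynamic domain weights (shapes and weight distribution assumed,
    # not taken from the paper).
    import numpy as np

    def synthesize_virtual_samples(source_samples, target_sample,
                                   num_virtual=10, rng=None):
        """Blend source-domain CSI samples of one action with the single
        target-domain sample of the same action (one-shot setting).

        source_samples: array (n_source, ...) of labeled source samples.
        target_sample:  array (...) of the one target-domain sample.
        Returns an array of num_virtual synthetic samples interpolating
        between the two domains.
        """
        rng = rng or np.random.default_rng()
        virtual = []
        for _ in range(num_virtual):
            src = source_samples[rng.integers(len(source_samples))]
            # Dynamic domain weight, resampled per virtual sample.
            w = rng.uniform(0.0, 1.0)
            virtual.append(w * target_sample + (1.0 - w) * src)
        return np.stack(virtual)

    The synthetic samples would then augment the target-domain training set before the recognition model is adapted.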
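    The bimodal fusion idea can be sketched similarly. The following PyTorch fragment shows two per-modality encoders, feature concatenation, and an MSE consistency term that pulls the amplitude and phase features together, reflecting that both modalities describe the same action. It is a minimal sketch under assumed layer sizes, classifier head, and loss weighting, not the paper's fusion autoencoder.

    # Sketch: bimodal fusion by feature concatenation under a consistency
    # criterion (architecture and loss weighting are assumptions).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BimodalFusionNet(nn.Module):
        def __init__(self, in_dim=90, feat_dim=64, num_classes=6):
            super().__init__()
            # Separate encoders for the amplitude and phase modalities.
            self.amp_encoder = nn.Sequential(
                nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
            self.phase_encoder = nn.Sequential(
                nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
            # Classifier operates on the concatenated bimodal feature.
            self.classifier = nn.Linear(2 * feat_dim, num_classes)

        def forward(self, amplitude, phase):
            f_amp = self.amp_encoder(amplitude)
            f_phase = self.phase_encoder(phase)
            fused = torch.cat([f_amp, f_phase], dim=-1)  # feature concatenation
            return self.classifier(fused), f_amp, f_phase

    def loss_fn(logits, labels, f_amp, f_phase, consistency_weight=0.1):
        # Classification loss plus a consistency term between the two
        # modal features.
        cls = F.cross_entropy(logits, labels)
        consistency = F.mse_loss(f_amp, f_phase)
        return cls + consistency_weight * consistency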

    Keywords
    Action recognition, Modal fusion, Data synthesis, Domain adaptation
    Published
    2023-06-27
    Appears in
    SpringerLink
    http://dx.doi.org/10.1007/978-3-031-34776-4_16