
Research Article
Deep Adversarial Neural Network Based on Transformer Encoder for Specific Emitter Identification Under Varying SNR
@INPROCEEDINGS{10.1007/978-3-031-60347-1_31,
  author={Chang Liu and Zhigang Li and Haoran Zha and Qiao Tian and Meiyu Wang},
  title={Deep Adversarial Neural Network Based on Transformer Encoder for Specific Emitter Identification Under Varying SNR},
  proceedings={Mobile Multimedia Communications. 16th EAI International Conference, MobiMedia 2023, Guilin, China, July 22-24, 2023, Proceedings},
  proceedings_a={MOBIMEDIA},
  year={2024},
  month={10},
  keywords={unsupervised domain adaptation; specific emitter identification; domain adversarial neural network; transformer encoder},
  doi={10.1007/978-3-031-60347-1_31}
}
Chang Liu
Zhigang Li
Haoran Zha
Qiao Tian
Meiyu Wang
Year: 2024
Deep Adversarial Neural Network Based on Transformer Encoder for Specific Emitter Identification Under Varying SNR
MOBIMEDIA
Springer
DOI: 10.1007/978-3-031-60347-1_31
Abstract
Specific Emitter Identification (SEI) is a technology that distinguishes emitters by the unique hardware differences inherent in their devices. In practical applications, labeled datasets are often unavailable, so transferring knowledge from a labeled source domain to an unlabeled target domain is critical. However, the individual signals of different emitters are disturbed by different degrees of noise during propagation, and model performance degrades because of the domain differences that this noise causes. To address this challenge, we introduce unsupervised domain adaptation (UDA) into SEI under different noise conditions. The main principle of UDA is to reduce the discrepancy between the labeled source domain and the unlabeled target domain and to learn domain-invariant features shared by the two domains. In this paper, we propose a domain adversarial neural network (DANN) based on a transformer encoder (DANN-Transformer) for SEI under different noise conditions. The domain adaptation behavior achieves an adversarial effect by adding gradient reversal layers, while the transformer encoder better extracts the contextual relevance of signals and provides deeper transferable features. Finally, in experiments on a real ADS-B dataset with SNRs between -20 dB and -5 dB, DANN-Transformer shows superior performance compared with other baseline models. It also exhibits good anti-noise performance, and accuracy above 95% can still be achieved when the number of target domain samples is 200.
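
To make the adversarial mechanism described above concrete, the following is a minimal PyTorch-style sketch of a DANN head with a gradient reversal layer. The feature extractor, layer sizes, signal length, and emitter count are illustrative placeholders only; this is not the authors' DANN-Transformer implementation, which uses a transformer encoder as the shared extractor.

```python
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows back into the shared feature extractor.
        return -ctx.lambd * grad_output, None


class DANNSketch(nn.Module):
    """Shared feature extractor + label classifier + domain discriminator (DANN layout)."""

    def __init__(self, signal_len=1024, feat_dim=128, num_emitters=10):
        super().__init__()
        # Placeholder extractor for 2-channel (I/Q) signals; the paper uses a transformer encoder here.
        self.extractor = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * signal_len, feat_dim), nn.ReLU()
        )
        self.classifier = nn.Linear(feat_dim, num_emitters)  # emitter label prediction
        self.discriminator = nn.Linear(feat_dim, 2)          # source vs. target domain

    def forward(self, x, lambd=1.0):
        feats = self.extractor(x)
        class_logits = self.classifier(feats)
        domain_logits = self.discriminator(GradReverse.apply(feats, lambd))
        return class_logits, domain_logits
```

In DANN-style training, the classification loss is computed only on labeled source samples, while the domain loss is computed on both source and target batches; the reversed gradient then pushes the shared extractor toward domain-invariant features.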