Machine Learning and Intelligent Communications. 4th International Conference, MLICOM 2019, Nanjing, China, August 24–25, 2019, Proceedings

Research Article

Active Sampling Based on MMD for Model Adaptation

@INPROCEEDINGS{10.1007/978-3-030-32388-2_34,
    author={Qi Zhang and Donghai Guan and Weiwei Yuan and Asad Khattak},
    title={Active Sampling Based on MMD for Model Adaptation},
    proceedings={Machine Learning and Intelligent Communications. 4th International Conference, MLICOM 2019, Nanjing, China, August 24--25, 2019, Proceedings},
    proceedings_a={MLICOM},
    year={2019},
    month={10},
    keywords={Active sampling; Maximum mean discrepancy; Transfer learning; Characteristics; Uncertainty},
    doi={10.1007/978-3-030-32388-2_34}
}
Qi Zhang*, Donghai Guan*, Weiwei Yuan*, Asad Khattak¹,*
¹ Zayed University
*Contact email: stonewell@nuaa.edu.cn, dhguan@nuaa.edu.cn, yuanweiwei@nuaa.edu.cn, Asad.Khattak@zu.ac.ae

Abstract

In this paper, we present a method for transfer learning with minimal supervised information. Researchers have recently proposed various algorithms for transfer learning problems, especially unsupervised domain adaptation. These methods mainly focus on learning a good common representation and using it directly for the downstream task, but they ignore the fact that this representation may not capture target-specific features well. To address this problem, we attempt to capture target-specific features by utilizing labeled data in the target domain. The challenge then becomes how to achieve good results with as little supervised information as possible. To meet this challenge, we actively select instances for training and model adaptation based on the maximum mean discrepancy (MMD) method: we label a small number of valuable target instances to capture target-specific features and fine-tune the classifier networks. Specifically, we choose a batch of target-domain instances that lie far from the common representation space and have maximum prediction entropy. The first requirement helps learn a good representation for the target domain, while the second aims to improve classifier performance. Experiments on several datasets show that our method yields significant improvements and is competitive with common methods.
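The abstract only sketches the selection criterion, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes a Gaussian-kernel MMD as the measure of distance from the shared (source) representation and Shannon entropy as the uncertainty term; the function and parameter names (select_batch, alpha, sigma) are hypothetical.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
        d2 = (np.sum(X**2, axis=1)[:, None]
              + np.sum(Y**2, axis=1)[None, :]
              - 2.0 * X @ Y.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    def mmd2(X, Y, sigma=1.0):
        # Squared maximum mean discrepancy between samples X and Y.
        return (gaussian_kernel(X, X, sigma).mean()
                + gaussian_kernel(Y, Y, sigma).mean()
                - 2.0 * gaussian_kernel(X, Y, sigma).mean())

    def prediction_entropy(probs, eps=1e-12):
        # Shannon entropy of each row of predicted class probabilities.
        return -np.sum(probs * np.log(probs + eps), axis=1)

    def select_batch(target_feats, source_feats, target_probs,
                     batch_size=32, alpha=0.5, sigma=1.0):
        # Score each target instance by (i) how far it lies from the
        # shared representation, approximated here by a per-instance MMD
        # against the source features, and (ii) the entropy of the current
        # classifier's prediction. Return indices of the top batch to label.
        dist = np.array([mmd2(x[None, :], source_feats, sigma)
                         for x in target_feats])
        unc = prediction_entropy(target_probs)
        # Normalize both criteria to [0, 1] before mixing (a design choice
        # not specified in the abstract).
        dist = (dist - dist.min()) / (np.ptp(dist) + 1e-12)
        unc = (unc - unc.min()) / (np.ptp(unc) + 1e-12)
        score = alpha * dist + (1.0 - alpha) * unc
        return np.argsort(-score)[:batch_size]

In a full pipeline, one would alternate between labeling the selected batch and fine-tuning the classifier on it, as the abstract describes; how the two criteria are weighted (alpha above) is an assumption of this sketch.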