INIS 24(2): e2

Research Article

Vehicle Type Classification with Small Dataset and Transfer Learning Techniques

@ARTICLE{10.4108/eetinis.v11i2.4678,
    author={Quang-Tu Pham and Dinh-Dat Pham and Khanh-Ly Can and Hieu Dao To and Hoang-Dieu Vu},
    title={Vehicle Type Classification with Small Dataset and Transfer Learning Techniques},
    journal={EAI Endorsed Transactions on Industrial Networks and Intelligent Systems},
    volume={11},
    number={2},
    publisher={EAI},
    journal_a={INIS},
    year={2024},
    month={3},
    keywords={small dataset, Deep Learning, Transfer Learning, Knowledge Distillation},
    doi={10.4108/eetinis.v11i2.4678}
}
Quang-Tu Pham1, Dinh-Dat Pham1, Khanh-Ly Can1, Hieu Dao To1, Hoang-Dieu Vu1,*
1: Phenikaa University
*Contact email: dieu.vuhoang@phenikaa-uni.edu.vn

Abstract

This study examines deep learning training techniques on a restricted dataset of roughly 400 vehicle images sourced from Kaggle. With so little data, training models from scratch is impractical, which argues instead for using pre-trained models with pre-trained weights. The investigation considers three prominent models, EfficientNetB0, ResNetB0, and MobileNetV2, with EfficientNetB0 emerging as the most proficient choice. Using a gradual layer-unfreezing schedule over a specified number of epochs, EfficientNetB0 achieves remarkable accuracy, reaching 99.5% on the training dataset and 97% on the validation dataset. In contrast, training models from scratch yields notably lower accuracy. Here, knowledge distillation proves pivotal, overcoming this limitation and raising accuracy from 29.5% in training and 20.5% in validation to 54% and 45%, respectively. The study's contribution lies in exploring transfer learning with gradual layer unfreezing and in elucidating the potential of knowledge distillation, showing how both robustly enhance model performance under data scarcity and thus address the challenges of training deep learning models on limited datasets. The findings underscore the practical value of these techniques for achieving strong results under the data constraints of real-world scenarios.
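
The abstract's transfer-learning recipe (a frozen pre-trained backbone whose top layers are unfrozen in stages) can be illustrated with the following minimal sketch in TensorFlow/Keras. The class count, layer counts, learning rates, epoch budgets, and the stand-in random data are illustrative assumptions, not the authors' exact settings.

import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 5            # assumption: number of vehicle types in the Kaggle set
IMG_SHAPE = (224, 224, 3)

# EfficientNetB0 backbone with pre-trained ImageNet weights, head removed.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
base.trainable = False     # stage 1: freeze the whole backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

def compile_model(lr):
    model.compile(optimizer=optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Stand-in data so the sketch runs end to end; in practice the dataset would be
# built with tf.keras.utils.image_dataset_from_directory from the ~400 Kaggle
# vehicle images (Keras EfficientNet expects inputs in the 0-255 range).
x = tf.random.uniform((16, *IMG_SHAPE), maxval=255.0)
y = tf.random.uniform((16,), maxval=NUM_CLASSES, dtype=tf.int32)
train_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(8)

# Stage 1: train only the new classification head.
compile_model(1e-3)
model.fit(train_ds, epochs=1)

# Stage 2+: gradually unfreeze the top of the backbone in chunks, lowering the
# learning rate each time so the pre-trained features are not destroyed.
for n_unfrozen, lr in [(20, 1e-4), (60, 1e-5)]:
    base.trainable = True                     # make every backbone layer trainable...
    for layer in base.layers[:-n_unfrozen]:
        layer.trainable = False               # ...then re-freeze all but the top chunk
    for layer in base.layers[-n_unfrozen:]:
        if isinstance(layer, layers.BatchNormalization):
            layer.trainable = False           # keep BN statistics fixed on tiny datasets
    compile_model(lr)                         # recompile so the new trainable set takes effect
    model.fit(train_ds, epochs=1)

Recompiling after each unfreezing step is what makes the newly trainable layers visible to the optimizer; the progressively smaller learning rates are a common safeguard when fine-tuning on a dataset this small.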
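For the knowledge-distillation result, the core ingredient is a loss that blends hard-label cross-entropy with a temperature-softened KL term against the teacher's outputs. The sketch below assumes a frozen, fine-tuned EfficientNetB0 teacher and a small student CNN trained from scratch, both emitting raw logits; the temperature and alpha values are illustrative, not the paper's reported settings.

import tensorflow as tf

def distillation_loss(y_true, student_logits, teacher_logits,
                      temperature=4.0, alpha=0.1):
    """Hard-label cross-entropy blended with soft-target KL divergence.

    `temperature` softens both logit distributions so the student can learn the
    teacher's inter-class similarities; `alpha` weights the hard-label term.
    """
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, student_logits, from_logits=True)
    soft = tf.keras.losses.kl_divergence(
        tf.nn.softmax(teacher_logits / temperature),
        tf.nn.softmax(student_logits / temperature)) * temperature ** 2
    return alpha * hard + (1.0 - alpha) * soft

@tf.function
def distill_step(x, y, teacher, student, optimizer):
    """One training step: the frozen teacher supplies soft targets for the student."""
    teacher_logits = teacher(x, training=False)
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)
        loss = tf.reduce_mean(distillation_loss(y, student_logits, teacher_logits))
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss

The softened teacher outputs carry inter-class structure that roughly 400 hard labels alone cannot provide, which is the mechanism behind the reported jump of the from-scratch student from 29.5%/20.5% to 54%/45% training/validation accuracy.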