Intelligent Systems and Machine Learning. First EAI International Conference, ICISML 2022, Hyderabad, India, December 16-17, 2022, Proceedings, Part II

Research Article

Gesture Controlled Power Window Using Deep Learning

Cite
BibTeX:
    @INPROCEEDINGS{10.1007/978-3-031-35081-8_28,
        author={Jatin Rane and Suhas Mohite},
        title={Gesture Controlled Power Window Using Deep Learning},
        proceedings={Intelligent Systems and Machine Learning. First EAI International Conference, ICISML 2022, Hyderabad, India, December 16-17, 2022, Proceedings, Part II},
        proceedings_a={ICISML PART 2},
        year={2023},
        month={7},
        keywords={computer vision, open-cv, LeNet, keras, gesture recognition},
        doi={10.1007/978-3-031-35081-8_28}
    }
    
Plain text:
    Jatin Rane, Suhas Mohite. Gesture Controlled Power Window Using Deep Learning. ICISML PART 2, Springer, 2023. DOI: 10.1007/978-3-031-35081-8_28
Jatin Rane1,*, Suhas Mohite1
1: Department of Mechanical Engineering, College of Engineering Pune Technological University, Wellesley Road, Shivajinagar, Pune
*Contact email: ranejw21.mech@coep.ac.in

Abstract

The rapid growth of informatics and human-computer interaction is opening up new application areas, and researchers are working to fill the remaining knowledge gaps. Alongside speech-based communication, touch-free interaction with electronic devices is growing in popularity and offers consumers easy-to-use control mechanisms; beyond the entertainment sector, these interaction modes are now being used successfully inside cars. In this study, real-time human gesture recognition using computer vision is demonstrated, and the feasibility of hand-gesture interaction in the automotive environment is investigated. With this touch-free user interface, actions can be carried out based on the movements that are detected. The system is implemented on Windows using Python modules, with OpenCV and Keras as the platforms for recognition. The vision-based algorithm recognizes the gesture displayed on the screen. A recognition model was trained in Keras using a background-removal technique and the LeNet architecture. Four models were created and their accuracies compared; the convex hull and threshold model outperformed the other models.
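
The pipeline described above (background subtraction and thresholding to isolate the hand, a convex hull computed on the segmented contour, and a LeNet-style classifier built in Keras) can be illustrated with the short sketch below. This is a minimal sketch under assumed settings, not the authors' implementation: the input resolution, the number of gesture classes, the threshold value, and all names (segment_hand, build_lenet, IMG_SIZE, NUM_GESTURES) are illustrative.

    # Illustrative sketch only; values and names are assumptions, not the paper's code.
    import cv2
    import numpy as np
    from tensorflow.keras import layers, models

    IMG_SIZE = 64        # assumed classifier input resolution
    NUM_GESTURES = 4     # assumed number of gesture classes

    def segment_hand(frame_bgr, background_gray, diff_threshold=25):
        """Remove the static background, threshold the hand, and return its convex hull."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)
        # Background removal: difference against a previously captured background frame.
        diff = cv2.absdiff(background_gray, gray)
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None, None
        hand = max(contours, key=cv2.contourArea)   # largest contour assumed to be the hand
        hull = cv2.convexHull(hand)                 # convex hull of the hand contour
        return mask, hull

    def build_lenet(num_classes=NUM_GESTURES):
        """LeNet-style CNN: two conv/pool stages followed by dense layers."""
        model = models.Sequential([
            layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
            layers.Conv2D(6, 5, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(16, 5, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(120, activation="relu"),
            layers.Dense(84, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

At inference time, the binary mask (or the hull region cropped from it) would be resized to the model's input size, scaled to [0, 1], and passed to model.predict, with the predicted class mapped to a power-window command such as raise, lower, or stop. The exact command mapping and the four model variants compared in the paper are not reproduced here.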

Keywords
computer vision, open-cv, LeNet, keras, gesture recognition
Published
2023-07-10
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-35081-8_28