Intelligent Systems and Machine Learning. First EAI International Conference, ICISML 2022, Hyderabad, India, December 16-17, 2022, Proceedings, Part II

Research Article

Real-Time Identification of Medical Equipment Using Deep CNN and Computer Vision

Cite
BibTeX
    @INPROCEEDINGS{10.1007/978-3-031-35081-8_25,
        author={Jaya Rubi and R. J. Hemalatha and Bethanney Janney},
        title={Real-Time Identification of Medical Equipment Using Deep CNN and Computer Vision},
        proceedings={Intelligent Systems and Machine Learning. First EAI International Conference, ICISML 2022, Hyderabad, India, December 16-17, 2022, Proceedings, Part II},
        proceedings_a={ICISML PART 2},
        year={2023},
        month={7},
        keywords={Deep CNN; surgical equipment; computer vision; Keras implementation; gesture recognition; image processing},
        doi={10.1007/978-3-031-35081-8_25}
    }
    
Jaya Rubi1,*, R. J. Hemalatha1, Bethanney Janney2
  • 1: Department of Biomedical Engineering, Vels Institute of Science, Technology and Advanced Studies
  • 2: Department of Biomedical Engineering
*Contact email: jayarubiap@gmail.com

Abstract

Sign language is a form of communication in which hand gestures and symbols are used to convey meaning. Communication enables people to exchange feelings and ideas; similarly, when medical equipment is handled by a robot, sign language should not be a barrier to such applications. The purpose of this work is to provide a real-time system that can convert Indian Sign Language (ISL) to text. Most existing work in this area is based on handcrafted features. This paper introduces a deep learning approach that classifies the signs using a convolutional neural network. First, we build a classifier model from the signs; then, using the Keras implementation of a convolutional neural network in Python, we analyse those signs and identify the surgical tools. A second real-time stage uses skin segmentation to find the region of interest in each frame. The segmented region is fed to the classifier model to predict the sign, and the predicted sign in turn identifies the surgical tool and is converted into text.
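The skin-segmentation stage described in the abstract can be sketched as follows. The paper does not publish its colour space, thresholds, or code, so everything below is an illustrative assumption: a common YCbCr skin-colour rule is used to mask skin pixels, and the bounding box of the mask gives the region of interest that would then be resized and fed to the Keras CNN classifier.

```python
import numpy as np

# Hypothetical skin-colour thresholds in YCbCr; the paper's actual
# values are not published.
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def rgb_to_ycbcr(img):
    """Convert an HxWx3 uint8 RGB image to YCbCr (ITU-R BT.601)."""
    img = img.astype(np.float32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_roi(img):
    """Return the bounding box (top, bottom, left, right) of skin-coloured
    pixels, or None if no skin is found. The crop img[top:bottom, left:right]
    is what would be resized and passed to the sign classifier."""
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    mask = ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max() + 1, cols.min(), cols.max() + 1
```

In a real pipeline the crop would come from a camera frame (e.g. via OpenCV) and be resized to the CNN's input shape before prediction; fixed thresholds like these are sensitive to lighting, which is one reason the classifier itself is trained on segmented regions rather than raw frames.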

Keywords
Deep CNN; surgical equipment; computer vision; Keras implementation; gesture recognition; image processing
Published
2023-07-10
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-35081-8_25