Proceedings of the 2nd Multidisciplinary International Conference, MIC 2022, 12 November 2022, Semarang, Central Java, Indonesia

Research Article

An Efficient Segmentation of U-area and T-area on Facial Images by Using Matlab with Hough Transform and Viola-Jones Algorithm Base

    @INPROCEEDINGS{10.4108/eai.12-11-2022.2327390,
        author={Indriyani Indriyani and Ida Ayu Dwi Giriantari and Made Sudarma and I Made Widyantara},
        title={An Efficient Segmentation of U-area and T-area on Facial Images by Using Matlab with Hough Transform and Viola-Jones Algorithm Base},
        proceedings={Proceedings of the 2nd Multidisciplinary International Conference, MIC 2022, 12 November 2022, Semarang, Central Java, Indonesia},
        publisher={EAI},
        proceedings_a={MIC},
        year={2023},
        month={2},
        keywords={u-area; t-area; viola-jones algorithm},
        doi={10.4108/eai.12-11-2022.2327390}
    }
    
Indriyani Indriyani1,*, Ida Ayu Dwi Giriantari2, Made Sudarma2, I Made Widyantara2
  • 1: Institute of Technology and Business (ITB) STIKOM Bali, Indonesia
  • 2: Udayana University, Indonesia
*Contact email: indry.joice@gmail.com

Abstract

The left and right cheeks and chin (U-area) and the forehead and nose (T-area) are useful for examining skin type; these regions provide crucial information for determining facial characteristics. The Matlab implementation, built on the Viola-Jones algorithm, is helpful for detecting the U- and T-areas: it accurately detects the face in normal (frontal) images, including the positions of the eyes, nose, and mouth. The T-area was determined from the positions of the eyes and nose, while the U-area was identified from the positions of the eyes, nose, and mouth. The accuracy of determining the two areas with this method was close to 100%.
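
As an illustration of the idea described in the abstract, the sketch below shows how Viola-Jones detections in Matlab (via the Computer Vision Toolbox cascade detectors 'FrontalFaceCART', 'EyePairBig', 'Nose', and 'Mouth') could be turned into T-area and U-area boxes. The file name face.jpg and the specific box arithmetic are illustrative assumptions, not the authors' exact formulation, and robustness tweaks such as search-region restriction or merge thresholds are omitted.

    % Illustrative sketch only: Viola-Jones detections via the Computer Vision
    % Toolbox, with hypothetical region arithmetic for the T- and U-areas.
    I = imread('face.jpg');                       % hypothetical input image

    faceDet  = vision.CascadeObjectDetector('FrontalFaceCART');
    eyeDet   = vision.CascadeObjectDetector('EyePairBig');
    noseDet  = vision.CascadeObjectDetector('Nose');
    mouthDet = vision.CascadeObjectDetector('Mouth');

    faceBox = step(faceDet, I);                   % [x y w h]; assumes one face is found
    face    = imcrop(I, faceBox(1,:));            % restrict part detection to the face

    eyeBox   = step(eyeDet,   face);
    noseBox  = step(noseDet,  face);
    mouthBox = step(mouthDet, face);

    % T-area: forehead and nose, bounded horizontally by the eye pair and
    % vertically from the top of the face crop down to the bottom of the nose.
    tArea = [eyeBox(1,1), 1, eyeBox(1,3), noseBox(1,2) + noseBox(1,4)];

    % U-area: cheeks and chin, taken here as the face band below the eyes;
    % the mouth box marks where the cheek band ends and the chin begins.
    uTop    = eyeBox(1,2) + eyeBox(1,4);
    chinTop = mouthBox(1,2) + mouthBox(1,4);
    uArea   = [1, uTop, size(face,2), size(face,1) - uTop];

    figure; imshow(face); hold on;
    rectangle('Position', tArea, 'EdgeColor', 'g');                % T-area in green
    rectangle('Position', uArea, 'EdgeColor', 'r');                % U-area in red
    line([1 size(face,2)], [chinTop chinTop], 'Color', 'y');       % cheek/chin split

The detections are returned as [x y w h] bounding boxes, so the T- and U-areas are simply rectangles derived from the eye, nose, and mouth boxes, matching the dependence on part positions stated in the abstract.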