Context-Aware Systems and Applications. 10th EAI International Conference, ICCASA 2021, Virtual Event, October 28–29, 2021, Proceedings

Research Article

Recover Realistic Faces from Sketches

Cite
@INPROCEEDINGS{10.1007/978-3-030-93179-7_9,
    author={Khoa Tan Truong and Khai Dinh Lai and Sang Thanh Nguyen and Thai Hoang Le},
    title={Recover Realistic Faces from Sketches},
    proceedings={Context-Aware Systems and Applications. 10th EAI International Conference, ICCASA 2021, Virtual Event, October 28--29, 2021, Proceedings},
    proceedings_a={ICCASA},
    year={2022},
    month={1},
    keywords={Face sketch to image translation, Generative adversarial networks (GANs), Sketch-based synthesis, Face image generation, Spatial attention, Dual generator, Conditional generative adversarial networks},
    doi={10.1007/978-3-030-93179-7_9}
}
    
Khoa Tan Truong, Khai Dinh Lai, Sang Thanh Nguyen, Thai Hoang Le. Recover Realistic Faces from Sketches. ICCASA, Springer, 2022. DOI: 10.1007/978-3-030-93179-7_9
Khoa Tan Truong1, Khai Dinh Lai1, Sang Thanh Nguyen1, Thai Hoang Le1,*
  • 1: Faculty of Information Technology
*Contact email: lhthai@fit.hcmus.edu.vn

Abstract

Generative Adversarial Networks (GANs) are currently regarded as the most effective approach to synthesizing realistic images from sketches. However, the effectiveness of this approach depends largely on the loss function used to learn the mapping between sketches and realistic images, which raises the question of how to choose an optimal loss function. In this paper, we investigate and propose a loss function that combines a pixel-based error and a context-based error in an appropriate ratio to obtain the best training result. The proposed loss function is used to train the generator's U-Net architecture, and the trained generator is then applied to translate a sketch into a realistic image. Evaluation on the CUHK Face Sketch Database (CUFS), the AR database (AR), and the CUHK ColorFERET Sketch Database (CUFSF), using the Structural Similarity Index (SSIM) together with visual inspection, shows that the proposed method is feasible.
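The abstract does not give the exact form of the combined loss, only that a pixel-based error and a context-based error are mixed in a chosen ratio. As an illustration only, the NumPy sketch below shows one plausible weighting; the edge-gradient "feature extractor" and the weight `lam` are hypothetical stand-ins, not the authors' actual formulation:

```python
import numpy as np

def edge_features(img):
    # Hypothetical stand-in for a learned feature extractor:
    # horizontal and vertical gradients capture coarse structure.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.concatenate([gx.ravel(), gy.ravel()])

def pixel_loss(fake, real):
    # Pixel-based error: mean absolute difference (L1).
    return np.mean(np.abs(fake - real))

def context_loss(fake, real, features=edge_features):
    # Context-based error: squared distance in a feature space.
    return np.mean((features(fake) - features(real)) ** 2)

def combined_loss(fake, real, lam=0.5):
    # Weighted combination; lam sets the pixel/context ratio
    # that the paper tunes for the best training result.
    return (1.0 - lam) * pixel_loss(fake, real) + lam * context_loss(fake, real)
```

In a GAN setting such a term would be added to the adversarial loss when training the U-Net generator; here it is shown standalone so the ratio between the two error terms is explicit.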

Keywords
Face sketch to image translation; Generative adversarial networks (GANs); Sketch-based synthesis; Face image generation; Spatial attention; Dual generator; Conditional generative adversarial networks
Published
2022-01-06
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-93179-7_9
Copyright © 2021–2025 ICST