
Research Article
Recover Realistic Faces from Sketches
@INPROCEEDINGS{10.1007/978-3-030-93179-7_9,
  author    = {Khoa Tan Truong and Khai Dinh Lai and Sang Thanh Nguyen and Thai Hoang Le},
  title     = {Recover Realistic Faces from Sketches},
  booktitle = {Context-Aware Systems and Applications. 10th EAI International Conference, ICCASA 2021, Virtual Event, October 28--29, 2021, Proceedings},
  series    = {ICCASA},
  year      = {2022},
  month     = {1},
  keywords  = {Face sketch to image translation; Generative adversarial networks (GANs); Sketch-based synthesis; Face image generation; Spatial attention; Dual generator; Conditional generative adversarial networks},
  doi       = {10.1007/978-3-030-93179-7_9}
}
Khoa Tan Truong
Khai Dinh Lai
Sang Thanh Nguyen
Thai Hoang Le
Year: 2022
Recover Realistic Faces from Sketches
ICCASA
Springer
DOI: 10.1007/978-3-030-93179-7_9
Abstract
Currently, Generative Adversarial Networks (GANs) are considered the most effective method for synthesizing realistic images from sketches. However, the effectiveness of this approach depends mainly on the loss function used to learn the mapping between sketches and realistic images, which raises the question of how to choose an optimal loss function. In this paper, we investigate and propose a loss function that combines a pixel-based error and a context-based error at a proper ratio to obtain the best training result. The proposed loss function is used to train the generator's U-Net architecture, and the trained network is then applied to convert a sketch into a realistic image. Based on the Structural Similarity Index (SSIM) and visual observations, evaluation results on the CUHK Face Sketch Database (CUFS), the AR database (AR), and the CUHK ColorFERET Sketch Database (CUFSF) show that the proposed method is feasible.
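The abstract's central idea, combining a pixel-based error with a context-based error at a tunable ratio, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper does not publish its exact formulation here, so the L1/MSE choices, the placeholder feature-space term, and the weight `lam` are all assumptions for demonstration.

```python
import numpy as np

def pixel_loss(pred, target):
    # Pixel-based error: mean absolute (L1) difference between
    # the generated image and the ground-truth photo.
    return np.mean(np.abs(pred - target))

def context_loss(pred_feats, target_feats):
    # Context-based error: distance between feature representations
    # (e.g. from a pretrained encoder). A mean-squared placeholder
    # stands in for whatever feature extractor is actually used.
    return np.mean((pred_feats - target_feats) ** 2)

def combined_loss(pred, target, pred_feats, target_feats, lam=0.5):
    # Weighted sum of the two terms; the ratio `lam` is the
    # hyperparameter the paper tunes to balance the errors.
    return pixel_loss(pred, target) + lam * context_loss(pred_feats, target_feats)

# Toy example with constant images and features.
pred = np.zeros((2, 2))
target = np.ones((2, 2))
pred_feats = np.zeros(4)
target_feats = np.full(4, 2.0)
total = combined_loss(pred, target, pred_feats, target_feats, lam=0.5)
```

In a real training loop this scalar would be minimized over the generator's parameters; the ratio between the two terms is what the paper reports searching for.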