Research Article
Behind the AI Art Creation: A Study of Generative Models for Text-to-Image Generation
@INPROCEEDINGS{10.4108/eai.15-9-2023.2340842, author={Lege Zhao and Han Zhang}, title={Behind the AI Art Creation: A Study of Generative Models for Text-to-Image Generation}, proceedings={Proceedings of the 2nd International Conference on Art Design and Digital Technology, ADDT 2023, September 15--17, 2023, Xi'an, China}, publisher={EAI}, proceedings_a={ADDT}, year={2024}, month={1}, keywords={ai art creation; text-to-image generation; generative models}, doi={10.4108/eai.15-9-2023.2340842} }
- Lege Zhao
- Han Zhang
Year: 2024
Behind the AI Art Creation: A Study of Generative Models for Text-to-Image Generation
ADDT
EAI
DOI: 10.4108/eai.15-9-2023.2340842
Abstract
The advancement of deep learning has greatly facilitated computer vision and natural language processing. Among its applications is text-to-image generation, the task of creating images from textual descriptions. Recent text-to-image techniques convert text into images in a compelling yet straightforward way, making them a prominent research topic in both AI and art creation. Generating images from text has a wide range of practical and creative applications in computer design and digital art creation. This paper conducts a comprehensive review of three types of generative models for text-to-image generation, aiming to provide a foundational understanding of the principles underlying these models.
Copyright © 2023–2024 EAI