Mobile Networks and Management. 13th EAI International Conference, MONAMI 2023, Yingtan, China, October 27-29, 2023, Proceedings

Research Article

Image Deblurring Using Fusion Transformer-Based Generative Adversarial Networks

Cite
BibTeX:
@INPROCEEDINGS{10.1007/978-3-031-55471-1_11,
    author={Jionghui Wang and Zhilin Xiong and Xueyu Huang and Haoyu Shi and Jiale Wu},
    title={Image Deblurring Using Fusion Transformer-Based Generative Adversarial Networks},
    proceedings={Mobile Networks and Management. 13th EAI International Conference, MONAMI 2023, Yingtan, China, October 27-29, 2023, Proceedings},
    proceedings_a={MONAMI},
    year={2024},
    month={3},
    keywords={image deblurring, Transformer, multi-head attention, GAN, multi-scale fusion},
    doi={10.1007/978-3-031-55471-1_11}
}
    
Plain text:
Jionghui Wang, Zhilin Xiong, Xueyu Huang, Haoyu Shi and Jiale Wu. Image Deblurring Using Fusion Transformer-Based Generative Adversarial Networks. MONAMI 2024. Springer. DOI: 10.1007/978-3-031-55471-1_11
Jionghui Wang1,*, Zhilin Xiong2, Xueyu Huang2, Haoyu Shi2, Jiale Wu2
  • 1: Minmetals Exploration and Development Co. Ltd.
  • 2: School of Software Engineering, Jiangxi University of Science and Technology
*Contact email: wangjh@minmetals.com

Abstract

Using the Transformer for motion deblurring enables a broader receptive field, and stacking multiple Transformer modules captures global correlations among features. However, this increases network complexity and poses convergence challenges. To address this, a Generative Adversarial Network called XT-GAN, which fuses multi-scale Transformers, is proposed. XT-GAN leverages pyramid features from a convolutional network as a lightweight substitute for multi-scale inputs. Within the output pyramid of convolutional features, features at different scales are processed in parallel with multi-head self-attention. These features are combined with a proposed feature enhancement module to represent information at different scales. Finally, the outputs of the various modules are concatenated and restored to the original image size. In experiments on the synthetic GoPro dataset, XT-GAN outperformed established networks such as DeblurGAN, DeepDeblur, and SRN, reducing computational complexity by at least 70% while achieving PSNR and SSIM values of 29.13 dB and 0.923, respectively. XT-GAN also demonstrated good robustness on the real-world RealBlur-J dataset, with PSNR and SSIM values of 28.40 dB and 0.852. It effectively handles motion blur in real-world scenarios, suppresses image artifacts, and restores natural, clear details.
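
To make the multi-scale fusion idea concrete, the sketch below shows one way the pyramid-plus-attention step described in the abstract could be realized in PyTorch. The module name PyramidAttentionFusion, the channel width, the number of pyramid levels, and the use of average pooling and bilinear upsampling are illustrative assumptions only; the paper's actual XT-GAN layers, feature enhancement module, and GAN training procedure are not reproduced here.

    # Minimal sketch (assumed design, not the authors' implementation):
    # apply multi-head self-attention to each level of a convolutional feature
    # pyramid in parallel, then upsample and concatenate the results.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidAttentionFusion(nn.Module):
        def __init__(self, channels=64, num_heads=4, num_levels=3):
            super().__init__()
            # One self-attention block per pyramid scale (hypothetical choice).
            self.attn = nn.ModuleList([
                nn.MultiheadAttention(channels, num_heads, batch_first=True)
                for _ in range(num_levels)
            ])
            # 1x1 convolution fuses the concatenated multi-scale features.
            self.fuse = nn.Conv2d(channels * num_levels, channels, kernel_size=1)

        def forward(self, x):
            b, c, h, w = x.shape
            outputs = []
            for level, attn in enumerate(self.attn):
                # Build the pyramid level by downsampling the input features.
                scale = 2 ** level
                feat = F.avg_pool2d(x, kernel_size=scale) if scale > 1 else x
                hb, wb = feat.shape[-2:]
                # Flatten spatial positions into a token sequence for attention.
                tokens = feat.flatten(2).transpose(1, 2)        # (B, H*W, C)
                attended, _ = attn(tokens, tokens, tokens)      # multi-head self-attention
                attended = attended.transpose(1, 2).reshape(b, c, hb, wb)
                # Restore the level to the original spatial size before fusion.
                outputs.append(F.interpolate(attended, size=(h, w),
                                             mode="bilinear", align_corners=False))
            return self.fuse(torch.cat(outputs, dim=1))

    # Usage: fuse a 64-channel feature map from an upstream convolutional backbone.
    features = torch.randn(1, 64, 64, 64)
    fused = PyramidAttentionFusion()(features)
    print(fused.shape)  # torch.Size([1, 64, 64, 64])

Computing the attention at progressively smaller pyramid levels is what keeps the token sequences short, which is one plausible reading of how the reported reduction in computational complexity is obtained relative to full-resolution Transformer stacks.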

Keywords
image deblurring, Transformer, multi-head attention, GAN, multi-scale fusion
Published
2024-03-17
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-55471-1_11