Digital Forensics and Cyber Crime. Fifth International Conference, ICDF2C 2013, Moscow, Russia, September 26-27, 2013, Revised Selected Papers

Research Article

Robust Copy-Move Forgery Detection Based on Dual-Transform

  • @INPROCEEDINGS{10.1007/978-3-319-14289-0_1,
        author={Munkhbaatar Doyoddorj and Kyung-Hyune Rhee},
        title={Robust Copy-Move Forgery Detection Based on Dual-Transform},
        proceedings={Digital Forensics and Cyber Crime. Fifth International Conference, ICDF2C 2013, Moscow, Russia, September 26-27, 2013, Revised Selected Papers},
        proceedings_a={ICDF2C},
        year={2015},
        month={2},
        keywords={Passive image forensics; Copy-move forgery; Dual-transform; Duplicated region detection; Mixture; Post-processing},
        doi={10.1007/978-3-319-14289-0_1}
    }
    
  • Munkhbaatar Doyoddorj, Kyung-Hyune Rhee: Robust Copy-Move Forgery Detection Based on Dual-Transform. ICDF2C, Springer, 2015. DOI: 10.1007/978-3-319-14289-0_1
Munkhbaatar Doyoddorj1,*, Kyung-Hyune Rhee1,*
  • 1: Pukyong National University
*Contact email: d_mbtr@pknu.ac.kr, khrhee@pknu.ac.kr

Abstract

With the increasing popularity of digital media and the ubiquitous availability of media editing software, innocuous multimedia content can easily be tampered with for malicious purposes. Copy-move forgery is an important category of image forgery in which a part of an image is duplicated and pasted over another part of the same image at a different location. Many schemes have been proposed to detect and locate the forged regions; however, these schemes fail when the copied region is affected by post-processing operations before being pasted. To rectify this problem and further improve detection accuracy, we propose a robust copy-move forgery detection method based on a dual-transform, in which a cascade of the Radon transform (RT) and the Discrete Cosine Transform (DCT) is used to detect such artifacts. We show that the dual-transform coefficients conform well to the underlying assumptions and therefore lead to more robust feature extraction. Experimental results demonstrate that our method is robust not only to noise contamination, blurring, and JPEG compression, but also to region scaling, rotation, and flipping.
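The abstract only sketches the pipeline, so as an illustration of the general idea (not the authors' exact method), a minimal dual-transform feature extractor and block matcher might look like the following. The block size, angle count, number of retained coefficients, matching tolerance, and all helper names here are assumptions chosen for the sketch:

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import rotate

def radon_transform(block, angles):
    # Discrete Radon transform: project the block onto 1-D lines
    # by rotating it and summing along columns at each angle.
    return np.stack([rotate(block, a, reshape=False, order=1).sum(axis=0)
                     for a in angles], axis=1)

def dual_transform_feature(block, n_angles=8, n_coeffs=16):
    # Cascade RT -> DCT, keeping only low-order coefficients as the
    # feature vector (parameter choices are illustrative, not from the paper).
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon_transform(block, angles)
    coeffs = dctn(sinogram, norm="ortho")
    return coeffs.flatten()[:n_coeffs]

def find_duplicates(image, block=16, step=4, tol=1e-6):
    # Extract features from overlapping blocks, sort them
    # lexicographically, and flag near-identical neighbours.
    feats, pos = [], []
    h, w = image.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            feats.append(dual_transform_feature(image[y:y + block, x:x + block]))
            pos.append((y, x))
    feats = np.asarray(feats)
    order = np.lexsort(feats.T[::-1])  # sort feature rows lexicographically
    matches = []
    for i, j in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[i] - feats[j]) < tol:
            matches.append((pos[i], pos[j]))
    return matches
```

Because the Radon projections change predictably under rotation and the DCT concentrates energy in low frequencies, features of this form tend to survive mild post-processing better than raw pixel blocks, which is the intuition behind the cascade.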