With the advent of Web 2.0 and the ubiquitous adoption of low-cost, high-resolution digital cameras, users upload and share images on a daily basis. This trend of public image distribution, combined with easy access to user-friendly editing software such as Photoshop and GIMP, has made image forgery a serious issue. Splicing is one of the most common types of image forgery: a region is copied from one image (the donor image) and pasted onto another image (the host image). Forgers often use splicing to give the false impression that an additional object is present in the image, or to remove an object from it. Image splicing can potentially be used to generate false propaganda for political purposes. For example, during the 2004 U.S. presidential election campaign, an image that showed John Kerry and Jane Fonda speaking together at an anti-Vietnam War protest was released and circulated. It was later discovered that the image was spliced and had been created for political purposes. The spliced image and the two authentic images used to create it can be seen above.
Many current splicing detection algorithms only determine whether a given image has been spliced and do not attempt to localize the spliced area. Relatively few algorithms tackle the splicing localization problem: determining which pixels in an image have been manipulated as a result of a splicing operation.
Ronald Salloum and Professor Jay Kuo are currently working on an image splicing localization research project. They are exploring deep learning and data-driven techniques to develop an effective solution to the image splicing localization problem. They published a paper [1] proposing a multi-task fully convolutional network (MFCN), which was shown to significantly outperform existing splicing localization techniques. The MFCN is trained simultaneously on the surface label (which indicates whether each pixel in an image is spliced or authentic) and the edge label (which indicates whether each pixel belongs to the boundary of the spliced region). Some examples of localization output from the MFCN are shown above. Please refer to the paper [1] for further information.
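To give a rough idea of what a two-branch, multi-task fully convolutional design looks like, the sketch below builds a shared convolutional encoder with separate per-pixel prediction heads for the surface and edge labels, trained jointly with a summed binary cross-entropy loss. This is not the authors' implementation: the use of PyTorch, the layer sizes, the simple encoder, and the equal loss weighting are all illustrative assumptions; see the paper [1] for the actual MFCN architecture and training details.

```python
# Minimal multi-task FCN sketch (illustrative only, not the MFCN from [1]).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder (downsamples the input by 4x).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Two task-specific heads, each producing one logit per pixel:
        # one for the surface label, one for the edge label.
        self.surface_head = nn.Conv2d(128, 1, 1)
        self.edge_head = nn.Conv2d(128, 1, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = self.encoder(x)
        # Upsample both prediction maps back to the input resolution.
        surface = F.interpolate(self.surface_head(feats), size=(h, w),
                                mode="bilinear", align_corners=False)
        edge = F.interpolate(self.edge_head(feats), size=(h, w),
                             mode="bilinear", align_corners=False)
        return surface, edge

def multitask_loss(surface_logits, edge_logits, surface_mask, edge_mask):
    # Joint objective: per-pixel binary cross-entropy on both label maps.
    loss_surface = F.binary_cross_entropy_with_logits(surface_logits, surface_mask)
    loss_edge = F.binary_cross_entropy_with_logits(edge_logits, edge_mask)
    return loss_surface + loss_edge

# Example of one training step on a random batch (shapes are illustrative).
model = MultiTaskFCN()
images = torch.rand(2, 3, 128, 128)
surface_mask = torch.randint(0, 2, (2, 1, 128, 128)).float()
edge_mask = torch.randint(0, 2, (2, 1, 128, 128)).float()
surface_logits, edge_logits = model(images)
loss = multitask_loss(surface_logits, edge_logits, surface_mask, edge_mask)
loss.backward()
```

The point of the two heads is that the edge branch forces the shared features to respect the boundary of the spliced region, which in turn sharpens the per-pixel surface prediction; how the two outputs are combined at inference time is described in the paper [1].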
Reference
[1] Ronald Salloum, Yuzhuo Ren, and C.-C. Jay Kuo, "Image Splicing Localization Using a Multi-task Fully Convolutional Network (MFCN)," Journal of Visual Communication and Image Representation, Volume 51, February 2018, Pages 201-209, ISSN 1047-3203, https://doi.org/10.1016/j.jvcir.2018.01.010.
By Ronald Salloum