Media Forensics
I designed and developed deep learning models to detect both physical and digital fraud artifacts on identification documents. However, these documents cannot be visualised here for privacy reasons.

Instead, I applied the same methods to popular examples of fake media from around the internet. Here are some interesting results:
PROJECT DETAILS
Year Jan 2020 - Present
Scope Fake media, Image manipulation, Deep learning
Digital Media - Case 1
On the far left is an image of the original newspaper clipping. Next to it is the manipulated version, with which the Trump campaign had blanketed its offices. More information on that here. Towards the right are localised heatmaps of potentially tampered regions. As you can see, the article's headline and image have been replaced.

While observant Twitter users were able to spot inconsistencies in the font (compare the 'R' in GORE and PRESIDENT), these visual artifacts are subtle, and those users already knew the image had been photoshopped. At scale, and on a daily basis, it quickly becomes almost impossible to watch for subtle tamper artifacts in every image we see; humans are simply not very good at it.
Such forensic methods can not only corroborate existing visual evidence of tampering but also automate the detection of manipulated images at scale.
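The localisation step can be sketched generically: slide a patch-level classifier over the image and accumulate its per-patch tamper probabilities into a heatmap. The sliding-window machinery below is a minimal illustration, not my actual model; `patch_score` stands in for any trained classifier that returns a probability in [0, 1] for a patch.

```python
import numpy as np

def tamper_heatmap(img, patch_score, patch=32, stride=16):
    """Slide a patch classifier over the image and average its
    tamper probabilities into a pixel-level heatmap.
    `patch_score` is a placeholder for a trained model."""
    h, w = img.shape[:2]
    heat = np.zeros((h, w))
    count = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = patch_score(img[y:y + patch, x:x + patch])
            heat[y:y + patch, x:x + patch] += p
            count[y:y + patch, x:x + patch] += 1
    # Average overlapping predictions; avoid division by zero at uncovered borders.
    return heat / np.maximum(count, 1)
```

Overlapping strides smooth the map, so a spliced region lights up as a contiguous hot area rather than isolated blocks.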
Case 2
A Facebook post with a doctored photograph claimed that BJP spokesperson Sambit Patra had prostrated himself on the Pakistani national flag. The photo was posted on a page with a large following and, predictably, went viral. The fact-checking organisation AFP confirmed that the image had been manipulated: the original version did not contain a Pakistani flag. More info on that here.

On the far left, you can see the original image posted by Mr. Patra on Twitter. Next to it is the photoshopped image. The heatmap, while it does respond at the tampered region, is not very pronounced.
The doctored image I had access to was of low resolution, and it gets progressively harder to find pixel-level anomalies in images where information has been lost through compression. However, the strength of forensics lies in the multiple perspectives available to us when evaluating an image. Below, we use two completely different approaches to further strengthen our claim that this image is indeed photoshopped.
On the right is a heatmap generated by a neural network model called Noiseprint. It works on the premise that every photograph contains noise that is specific to the camera model that captured it, a unique signature of that camera. Noiseprint is trained to recognise these signatures and to detect inconsistencies in the signature across an image.
In this map, we can see a distinct blue patch whose camera signature differs completely from the redder regions around it. This implies that the patch comes from different camera hardware and has most likely been spliced onto the photo.
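A much simplified version of this noise-consistency idea can be sketched without a trained network: high-pass filter the image to expose its noise residual, then compare residual statistics block by block. This only illustrates the premise and is not Noiseprint itself; the 3x3 box filter and 8-pixel block size are arbitrary choices for the sketch.

```python
import numpy as np

def noise_residual_map(img, block=8):
    """Crude noise-inconsistency map: high-pass the image by subtracting
    a 3x3 local mean, then measure residual variance per block. A spliced
    region from a different camera/pipeline tends to show a different
    noise level than the host image. Illustrative only."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    local_mean = sum(
        pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    residual = img - local_mean
    # Variance of the residual within each non-overlapping block.
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    blocks = residual[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3))
```

Real systems learn the camera signature rather than relying on raw noise power, but the block-wise comparison of residual statistics is the same underlying intuition.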

On the left is a dump of the JPEG header information taken from the doctored photo above.
Popular photo-editing software like Photoshop embeds non-graphic information in the metadata of JPEG images. Photoshop in particular uses the IPTC format to embed data about asset layers, paths, etc.
As can be seen in the image, the APP13 marker is used in JPEG files to designate a Photoshop Image Resource (PSIR). The presence of this marker indicates that Photoshop embedded this information and implies that the image went through a Photoshop workflow. More on this here.
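This metadata check is easy to automate by walking the JPEG segment list directly. The sketch below assumes a well-formed baseline JPEG and only looks for an APP13 segment (marker byte 0xED) whose payload begins with the "Photoshop 3.0" resource identifier:

```python
import struct

def find_photoshop_marker(jpeg_bytes):
    """Return True if the JPEG contains an APP13 segment carrying the
    'Photoshop 3.0' Image Resource identifier. Assumes a well-formed
    baseline JPEG; stops at the start of scan data."""
    i = 2  # skip the SOI marker (0xFFD8)
    n = len(jpeg_bytes)
    while i + 4 <= n:
        if jpeg_bytes[i] != 0xFF:
            break  # not a valid marker; bail out
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start of entropy-coded data
            break
        # Segment length is big-endian and includes its own two bytes.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xED and payload.startswith(b"Photoshop 3.0\x00"):
            return True
        i += 2 + length
    return False
```

Absence of the marker proves nothing (metadata is easily stripped), but its presence is a cheap, reliable signal that the file passed through Photoshop.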

Each of these three pieces of evidence makes a strong case for photo manipulation on its own, and the methods can be combined to automate detection across large numbers of images.
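As a rough sketch of that combination, each method can be wrapped as a callable that scores an image, with the scores fused into a single flag. The detectors, equal weighting, and threshold below are hypothetical placeholders, not a tuned pipeline:

```python
def fuse_detectors(image_bytes, detectors, threshold=0.5):
    """Run each forensic detector (a callable returning a score in [0, 1])
    on the image and flag it if the mean score crosses the threshold.
    Detectors and threshold are illustrative placeholders."""
    scores = [d(image_bytes) for d in detectors]
    flagged = sum(scores) / len(scores) >= threshold
    return flagged, scores
```

In practice, one would weight detectors by their reliability or train a meta-classifier on their outputs, but even naive averaging lets disagreeing signals (e.g. a weak heatmap plus a strong metadata hit) reinforce each other.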