As technology becomes increasingly advanced, greater care is needed to ensure that what circulates online is true. Deepfake images of Donald Trump and Pope Francis, generated by an AI program called Midjourney, have spread widely online and caused concern in both the public and private sectors. Founded by David Holz last year, Midjourney has since suspended free trials due to extraordinary demand and trial abuse. However, it is not just this platform people are worried about: OpenAI's DALL·E 2 and Stable Diffusion are also capable of creating deepfake images. These fake images carry serious repercussions, including fabricated news about politicians and non-consensual pornographic images.
Fortunately, there are four tips that can help distinguish AI-generated images from the real thing. One giveaway, according to AI expert Henry Ajder, is the "plasticky" appearance of images made with Midjourney. AI programs also tend to struggle with semantic consistency in lighting, shapes, and subtle details; telltale signs include incorrect lighting, exaggerated eyebrows, and unnatural bone structure. Additionally, Ajder advises questioning suspicious images and testing their context, and he suggests using reverse image search tools for fact-checking.
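To illustrate the reverse-image-search tip in practice, the sketch below builds lookup URLs for a publicly hosted image using two widely used services. The endpoint paths and query parameters shown are assumptions based on these services' public web forms, not documented APIs, so treat this as a convenience for opening the results in a browser rather than a definitive integration.

```python
from urllib.parse import urlencode

def reverse_search_urls(image_url: str) -> dict:
    """Build reverse-image-search URLs for a publicly hosted image.

    Note: the URL patterns below are assumptions based on the
    services' public web forms, not official APIs; open the
    resulting links in a browser to inspect where the image appears.
    """
    return {
        # Google's search-by-image web endpoint (assumed pattern).
        "google": "https://www.google.com/searchbyimage?"
                  + urlencode({"image_url": image_url}),
        # TinEye's URL-based search form (assumed pattern).
        "tineye": "https://tineye.com/search?"
                  + urlencode({"url": image_url}),
    }

# Example: generate links to check where a suspicious image appears online.
urls = reverse_search_urls("https://example.com/suspicious.jpg")
for service, url in urls.items():
    print(service, url)
```

Finding that an image only appears on low-credibility sites, or not at all, is a useful signal that it may be fabricated.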
Henry Ajder, a member of Meta’s Reality Labs European advisory council, is an AI expert actively working to protect the public from deepfake images. In addition to presenting on the dangers posed by artificial intelligence, Ajder has been advising organizations on how to guard against deepfakes. ID R&D founder Alexey Khitrov warns people to check for physical impossibilities in what an image portrays. Khitrov also advises running a search on the image and looking for a more authoritative source with known fact-checking capabilities. Fake images are a serious issue, and it is important to be diligent and verify the facts before believing or sharing an image or video.