The latest software update from Midjourney, an AI-based image generator, has caused excitement among graphic designers who enjoy its realistic art, but it has also raised questions about the future of deepfakes. The update, released in mid-March, appears to have solved a major shortcoming of the software: its inability to accurately render lifelike human hands. This development, together with other image-generating models such as DALL-E 2 and Stable Diffusion, has sparked larger debates about the dangers of generated content that can be indistinguishable from authentic images.
Hany Farid, professor of digital forensics at the University of California at Berkeley, notes that AI-generated images are becoming harder to identify than before because their level of detail has increased. Because these AI models ingest billions of images from the internet and learn the patterns between photos and the text associated with them, they can generate convincing photorealistic art for anything from mundane images of Santa Claus to highly specific requests such as a dachshund in space.
Such was the case this past week, when images showing the arrest of former president Donald Trump, generated on Midjourney, created a huge stir on the internet. These increasingly accurate images raise questions about the potential implications for future deepfake campaigns. Amelia Winger-Bearskin, a professor of AI and the arts at the University of Florida, attributes the improvement to the data set the software is trained on, suggesting that the number of images of hands in the data set may simply have increased.
Graphic designer Julie Wieland, who uses AI-generated images in visual marketing campaigns, notes, however, that the improvement has come at the cost of some artistic value in post-production, as she used to relish retouching the hands in AI-generated images.
The development and improvement of image-generating software from companies like Midjourney, OpenAI and Stability AI shows no sign of slowing down, and it is important that future updates include additional safeguards and watermarks to protect the public from deepfake campaigns that elicit emotional responses or distort the truth.