AI-Generated Image of Explosion Near Pentagon Just the Beginning, Says Tech CEO

An AI-generated image showing an explosion near the Pentagon went viral, alarming experts who warn that this is only the beginning of the AI-generated content era. Jeffrey McGregor, CEO of Truepic, predicts a flood of AI-generated content on social media, raising concerns about authenticity and ownership.

AI image-generation tools such as DALL-E, Midjourney, and Stable Diffusion have surged in popularity in recent months, letting users prompt for images or artwork in a particular style, with the results often going viral as deepfakes. Alongside AI-generated images, trolls are now using voice cloning to mimic celebrity voices, and scammers are using it to trick people into parting with money. Meanwhile, tools like GPTZero have been created to detect AI-written text, and some professors now use such detection services to check whether their students' essays were written by chatbots like ChatGPT.

According to Ben Colman, CEO of Reality Defender, fake images spread online because anybody can make them. Generative AI has reached a tipping point in accessibility and quality, eroding trust in what we see online.