OpenAI’s new AI tool, which generates surreal videos from text prompts, sparks concerns
OpenAI, a prominent artificial intelligence (AI) research lab, has unveiled a new tool that generates ultra-realistic videos from text, raising concerns about potential misuse and the manipulation of voters ahead of elections. The tool, called Sora, can create highly detailed videos of up to 60 seconds, incorporating complex scenes, camera movements, and characters expressing vibrant emotions.
OpenAI showcased several sample videos created with Sora, displaying surreal yet lifelike visuals. One video depicted a couple walking through a snowy Tokyo street, while another showed woolly mammoths navigating a snowy landscape with mountains in the background. All of these videos were generated from detailed text prompts provided to the tool.
While OpenAI has acknowledged the risks associated with the widespread use of such technology, experts and social media users have raised concerns, particularly in an election year in countries such as the United States. The potential misuse of AI-generated videos, including deepfake content and chatbots spreading political misinformation, is a significant area of concern.
Rachel Tobac, an ethical hacker and member of the technical advisory council of the US government’s Cybersecurity and Infrastructure Security Agency (CISA), expressed her worries about the tool’s potential to trick and manipulate the general public. She highlighted the possibility of adversaries using Sora to create videos falsely depicting vaccine side effects or exaggerated long lines on Election Day, discouraging people from voting.
OpenAI stated that it is taking several safety precautions to address these concerns, including rules intended to limit harmful use of the tool. These rules prohibit extreme violence, celebrity likenesses, and hateful imagery in generated videos. Additionally, OpenAI is working with experts to adversarially test the model in areas such as misinformation, hateful content, and bias.
However, Tobac remains concerned that adversaries could find ways to circumvent these rules. She called on OpenAI to partner with social media platforms so that AI-generated videos shared on those platforms can be automatically recognized and labeled, and to establish guidelines for labeling such content.
As of now, OpenAI has not responded to requests for comment on these concerns.
Gordon Crovitz, co-chief of NewsGuard, a company specializing in tracking misinformation, expressed apprehension about the tool’s potential to spread false narratives and disinformation on an unprecedented scale. He believes that tools like Sora could act as AI agents contributing to the proliferation of disinformation.
The emergence of AI tools like Sora raises important questions about the responsible development and use of such technologies. While they have the potential for various positive applications, including in creative industries and entertainment, safeguarding against misuse and manipulation is crucial to ensuring a trustworthy digital landscape.
In conclusion, OpenAI’s new AI tool, Sora, has sparked concern over its ability to generate surreal videos from text prompts, and over the potential for the tool to be misused to spread misinformation and manipulate voters, particularly during elections. While OpenAI is taking safety precautions, experts emphasize the need for additional measures to address the risks of AI-generated content, protect the public, and preserve the integrity of information disseminated online.