The rise of advanced artificial intelligence (AI) has people scared. From fears of losing jobs to the potential to manipulate the outcomes of elections, generative AI can be a cause of major concern. The old assurance that seeing is believing has become even less reliable in the wake of the myriad AI advancements currently being explored and implemented in the real world.
From making music to creating convincing images and videos, AI technology is advancing at a rapid rate. We have gone from building highly specialized AI tools to conversational AI that can fool people into believing it is human. Recently, a Republican admaker pitched his services to a Senate candidate for the upcoming election, only to find that someone else had already pitched AI-assisted services that could replicate the candidate's voice for creating ads, leaving the admaker in the dust.
This is obviously not the worst thing AI could be used for, and it is even relatively benign. However, it shows that the capabilities of AI are advancing to a point where the technology can be used in far more malicious or manipulative ways. Major organizations, like the Republican National Committee, have already begun rolling out ads made entirely by AI, albeit of low clarity and quality.
The thought of nefarious groups and organizations exploiting such powerful capabilities to falsify information and manipulate public opinion is a worrisome prospect. It has even been proposed that certain groups are using AI for what is essentially gain-of-function research on mankind. Nor is this entirely speculative: NSA cybersecurity director Rob Joyce has publicly warned about AI misuse, highlighting in particular how AI is quickly making it possible for hackers to produce convincing, natively written English.
The potential of AI to swing elections or manipulate public opinion is a scary one to contemplate. Generative AI is already at the point where it can produce content convincing enough to pass for the real thing. The days of being able to tell real content from generated content may already be over, as nefarious actors are already tapping into AI's power to create deepfakes.
OpenAI, the company behind ChatGPT, has been dubbed a dystopian dreamweaver by critics, a label that also highlights the absurdity of believing AI is completely under human control. The messiness AI can add to our elections and political landscape is vast, so it is important to stay vigilant about those who may try to tap into this technology for their own agenda.
The Republican National Committee did state that it would not use AI for unethical or deceptive purposes, but that does not mean other, less scrupulous actors will adhere to the same standard. Staying informed and aware of the potential sources of generated content is key to avoiding manipulation by AI-generated material.