In a taped interview on Fox News Channel's "Tucker Carlson Tonight", Tesla, SpaceX and Twitter CEO Elon Musk discussed his plans to develop an Artificial Intelligence (AI) project called "TruthGPT" to rival existing AI ventures such as OpenAI and DeepMind. Musk believes those companies are training their systems to be too "politically correct" and will not benefit humanity. He suggested that regulations, similar to those governing the aviation industry, should be put in place to ensure the progress of advanced AI remains beneficial to mankind. To that end, Musk envisions a regulatory agency that would draw insight from the AI industry and protect users from "untruths".
Elon Musk has a proven track record of success as the billionaire CEO of Tesla, SpaceX and Twitter. In October 2022, he completed a $44 billion deal to purchase the social media giant. On the show, Musk also discussed how law enforcement agencies are given access to Twitter, something he hopes to change by allowing users to encrypt their direct messages and conversations in the future.
Sam Altman's OpenAI and Google's DeepMind are just some of the AI initiatives Musk will be competing with in this space. Prior to his interview with Carlson, the celebrity CEO signed an open letter calling for a pause on advanced AI research, arguing alongside others that it could harm society. With TruthGPT, however, he hopes to build a system that runs counter to that call: an AI that tirelessly works to understand the universe in an effort to benefit humanity.
Musk also spoke about Twitter's ability to "significantly impact future domestic and international elections". The social media platform now runs with only about 20 percent of the employees it once had, a factor that, he says, will allow him to offer the encryption service in the future.
By building TruthGPT as an AI system that pursues understanding rather than exaggeration, and by pushing for a regulatory agency focused on rules that protect and benefit humanity, Musk hopes to counter "algorithmic amplification", a phenomenon Stanford researchers previously found to skew right-leaning.