Billionaire entrepreneur Elon Musk recently announced plans to launch a rival artificial intelligence (AI) platform called TruthGPT, positioned to compete with the Microsoft-backed chatbot ChatGPT and with Google's AI offerings. During a Fox News Channel interview, Musk claimed that OpenAI, the firm behind ChatGPT, “is training the AI to lie” and has become a “closed source”, “for-profit” organization “closely allied with Microsoft”. His remarks have raised concerns in the tech community.
Musk accompanied his announcement with criticism of Larry Page, co-founder of Google, for not taking AI safety seriously. He said: “I’m going to start something which I call ‘TruthGPT’, or a maximum truth-seeking AI, that tries to understand the nature of the universe. An AI that cares about understanding the universe is unlikely to annihilate humans because we are a part of the universe”.
Earlier this year, Musk poached AI researchers from Alphabet Inc’s Google to launch a startup aimed at matching OpenAI’s offerings. He has also registered a new Nevada-based firm called X.AI Corp, with Musk as the sole director and Jared Birchall, the managing director of his family office, as its secretary.
In addition to his AI project, Musk has taken to Twitter to issue warnings about the potential risks that come with powerful AI and to advocate for better regulation by the US Government when it comes to AI systems.
In the current climate, businesses with AI products need to be mindful of public judgment and to operate in a manner that takes public opinion and safety into account. Microsoft, Google, and OpenAI should likewise recognize the risks their products pose and take precautions to ensure those risks are mitigated in line with public safety expectations, which are constantly evolving.