OpenAI CEO Sam Altman has described artificial intelligence (AI) as the most important step yet for humans and technology. Speaking about the dangers of rapidly developing AI technology, he admitted that there are many ways it could go wrong, but added that he believes global regulation could address the biggest risks without being overdone.
Altman, whose company is valued at over $27bn, says he is interested in the potential benefits of the technology rather than financial gain. The startup’s products, including the chatbot ChatGPT and the image generator Dall-E, have sparked a multibillion-dollar frenzy among the venture capital investors and entrepreneurs accompanying the AI-driven technology.
OpenAI, which is at the forefront of generative AI technology, generates revenue by offering companies access to the application programming interfaces (APIs) they need to build their own AI applications. Microsoft, which has invested $13bn in OpenAI, provides the Azure cloud infrastructure used to train and run OpenAI’s models.
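In practice, that API access is usually consumed through OpenAI’s client libraries. The following is a minimal sketch in Python, assuming the official openai package (v1 or later) is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative only, not drawn from the article.

```python
# Minimal sketch: calling OpenAI's chat API via the official Python client.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "Summarise the case for global AI regulation in one sentence.",
        },
    ],
)

# The generated text is returned in the first choice of the response.
print(response.choices[0].message.content)
```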
Altman has publicly supported efforts to mitigate the risk of extinction from AI and has encouraged work to reduce issues such as algorithmic bias and racism. Despite warnings from technology leaders, some AI researchers argue that AI is not yet advanced enough to justify fears of it destroying humanity, and that focusing on doomsday scenarios distracts from other, more pressing issues.
As AI becomes increasingly powerful and development accelerates, governments and watchdogs are assessing potential risks and developing regulation. Major AI companies, including Microsoft and Alphabet’s Google, have committed to participating in independent public evaluations of their systems, while the US Department of Commerce has considered rules that would require AI models to go through a certification process before release.