On 22 May 2023, three top executives at OpenAI (Sam Altman, Greg Brockman, and Ilya Sutskever) published a blog post discussing the potential of AI systems to surpass human capability within the next ten years. OpenAI, the company behind the chatbot ChatGPT and led by Altman as CEO, is developing some of the most advanced AI technology in the field.
The blog post examined the implications of "superintelligence" and highlighted the need for rules and regulations in this field. The authors suggested that while today's AI systems can still be managed and guided safely, superintelligence will require special treatment and focused attention. Drawing on the examples of nuclear energy and synthetic biology, they explained how superintelligence poses a massive risk and has the potential to be highly destructive. In their view, this calls for an international authority that can assess risks, enforce necessary safety measures, and uphold regulations.
Altman wants to ensure that AI development does not come to a standstill because of these risks. He emphasized two main arguments: the enormous good that AI can bring, and the likelihood that suppressing such a powerful technology would be near-impossible. He argued that AI development should be regulated and should proceed at an agreed-upon pace. To build a more vibrant society rich with creative opportunity, and ultimately to prevent the worst-case scenarios, Altman believes it is of utmost importance to practice thorough AI governance, safety research, and education.
Sam Altman is a prominent figure in the AI technology scene and has gained considerable attention as a co-founder and the CEO of OpenAI. He is a strong advocate for AI development and promotes the idea of technology that contributes positively to the greater good. Altman has also testified before the United States Congress, where he was a strong voice for mitigating AI risks through regulation and oversight.