The leaders of OpenAI, the company behind ChatGPT, are calling for the regulation of Artificial Intelligence (AI), warning that unchecked development could have catastrophic consequences. Their proposals include an international regulatory agency modeled on the International Atomic Energy Agency (IAEA). Co-founders Greg Brockman and Ilya Sutskever, together with CEO Sam Altman, have argued for restrictions on how AI systems are deployed and for stricter security requirements, to reduce the risks these systems pose.
Leaders of this groundbreaking technology have voiced their fear that AI is advancing so quickly it could pose an existential risk to humanity. OpenAI's statement warns that AI capabilities could exceed expert skill level in most domains within the next ten years; given the unprecedented range of potential outcomes, risk management is crucial. OpenAI's leaders have also signed the statement on AI risk published by the Center for AI Safety (CAIS), an independent research nonprofit that has identified eight categories of catastrophic and existential risk arising from AI development.
Working to ensure safety and prevent a worst-case scenario, OpenAI hopes to establish a collective agreement among leading developers to limit the rate of AI capability growth in the near term. This could take the form of a government-led plan or an international monitoring protocol. Separately, Sam Altman has co-founded Worldcoin, a project whose eye-scanning Orb offers cryptocurrency in exchange for biometric data.
From the immense economic and educational potential of AI to its potentially dangerous applications, the complications of the AI boom are daunting. OpenAI's plea for an international body to manage the risks of AI progress matters for the world's future; the cost of failing to act could be catastrophic.