Two of the most influential tech companies in the artificial intelligence (AI) space are making a plea to governments across the globe, calling for regulation of the AI forces they’ve unleashed. OpenAI’s Sam Altman and Google’s Sundar Pichai both agree that AI is so powerful and rapidly advancing that governments should take proactive action to manage the risks associated with it.
In a blog post, Altman explained that proper risk management would allow us to “have a dramatically more prosperous future.” Pichai echoed the same sentiment in the Financial Times, writing that while AI should be regulated, the regulations should “balance innovation and potential harms.” In Pichai’s view, several parties must come together to devise the regulatory framework: governments, industry experts, academics and the public. Both leaders proposed that countries discuss and collaborate on AI regulation.
OpenAI and Google are at the forefront of developing products and tools powered by artificial intelligence, with OpenAI launching the buzzworthy ChatGPT chatbot and Google offering AI-driven products such as Google Home.
Altman sees the need for an international authority, similar to the IAEA, which monitors the use of nuclear power, to inspect AI systems, audit progress, ensure safety standards are met, and verify that AI deployments are adequately secured. Meanwhile, Pichai urged countries to collaborate on regulations, with the US and Europe leading the way.
Both Pichai and Altman agree that AI’s potential for progress is nearly immeasurable, but they recognize the need to regulate it carefully. Former Google CEO Eric Schmidt voiced a similar opinion, but warned against restrictive regulations that may stifle innovation. Ultimately, what remains clear is that both men agree AI is one of the most transformative technologies of our time, and governments must play their part in regulating its development.