Tech CEOs Call for US AI Referee to Ensure Safe Use
Tech CEOs including Elon Musk, Mark Zuckerberg, Sundar Pichai, and others met with lawmakers on Capitol Hill to discuss the regulation of artificial intelligence (AI). Lawmakers are concerned about the potential dangers of this rapidly growing technology, especially in light of the popularity of OpenAI’s ChatGPT chatbot. In response to these concerns, Musk called for a "referee" to oversee the safe use of AI, likening the role to that of a referee in sports.
Musk emphasized the importance of having a regulator to ensure that AI companies take actions that are safe and in the interest of the general public. He described the meeting as a service to humanity and suggested it could prove historic for the future of civilization. Zuckerberg agreed, saying Congress should engage with AI to support innovation while also implementing safeguards, and argued that American companies, working with the government, should set the standards for AI.
Over 60 senators participated in the discussion, and there was unanimous agreement among them that government regulation of AI is necessary. Democratic Senate Majority Leader Chuck Schumer, who organized the forum, expressed optimism about the progress made during the meeting but acknowledged that there is still a long way to go. Republican Senator Todd Young suggested that the Senate is approaching a point where relevant committees will begin considering legislation, while Senator Mike Rounds cautioned that it will take time for Congress to act.
Lawmakers at the meeting voiced concerns about deepfakes such as fabricated videos, as well as election interference and attacks on critical infrastructure, and emphasized the need for safeguards against these threats. Several prominent figures from the tech industry and labor unions also attended, including the CEOs of Nvidia, Microsoft, and IBM, former Microsoft CEO Bill Gates, and AFL-CIO labor federation President Liz Shuler.
Schumer highlighted the urgency of implementing regulations ahead of the 2024 U.S. general election, particularly with regard to deepfakes. The potential risks of more advanced AI systems led Musk and a group of AI experts to call for a six-month pause in development earlier this year. Regulators worldwide have been working to establish rules governing generative AI, which can produce text and images that are difficult to distinguish from human-made content.
In related news, Adobe, IBM, Nvidia, and other companies signed President Joe Biden’s voluntary AI commitments, which call for steps such as watermarking AI-generated content to mitigate misuse. The commitments, announced earlier this year, aim to ensure that the power of AI is not exploited for destructive purposes. Google, OpenAI, and Microsoft were among the companies that signed on in July. The White House is also working on an AI executive order.
There is a growing consensus among tech CEOs and lawmakers that government regulation of AI is needed. As the technology continues to advance, both sides recognize the importance of balancing innovation against the public interest. The Capitol Hill meeting provided a platform for discussing AI regulation, and while progress has been made, further work and deliberation will be necessary to develop effective policies that address the challenges and risks the technology poses.