New Regulatory Agency Needed to Address Risks of AI Technology

As Artificial Intelligence (AI) grows in capability, so do its risks. To address them, some experts are calling for a new regulatory agency to oversee use of the technology. Sam Altman, the CEO of OpenAI, has proposed an agency that could grant licenses to AI platforms, set operating standards, and enforce rules. These “guardrails,” as Rep. Anna Eshoo (D, California) calls them, are needed because AI can be powerful and dangerous if abused or misused.

However, proposals for such an agency have met resistance. Eric Schmidt, the former CEO of Google, worries it would add an extra layer of bureaucracy, and the Computer & Communications Industry Association, a trade organization, has warned against it. Republican lawmakers, such as Sen. Josh Hawley of Missouri, fear the agency could become “captured” by the very interests it is meant to regulate.

Despite the pushback, Senator Michael Bennet of Colorado is introducing a bill to create a five-member Federal Digital Platform Commission that would develop codes of conduct for internet platforms with industry input. Altman also recommends that the government encourage other countries to follow suit, pointing to the International Atomic Energy Agency as an example of a global body that has successfully promoted nuclear safety. Senate Majority Leader Chuck Schumer (D, New York) is also leading discussions on a bipartisan AI bill.

The Biden administration has said it will apply existing laws to AI in areas including lending, employment decisions, fraud prevention, and competition. The U.S. Copyright Office has launched a review of copyright concerns raised by AI, and tech companies such as Google, Microsoft, and OpenAI continue to update their products to improve safety. Alondra Nelson, a former Biden White House official, has suggested the administration could go further by using the federal government’s procurement authority to set standards for the AI systems the government buys.


OpenAI is a technology company dedicated to creating beneficial AI and the developer of the chatbot ChatGPT. The system is designed to generate conversational responses and to learn from user interactions. OpenAI says it is committed to the chatbot’s safety and takes proactive measures to ensure it is used ethically.
