Title: Regulating AI for a Better Future: Silicon Valley Author Calls for Action
In a world where AI is rapidly advancing, Silicon Valley author Tom Kemp argues that regulation is needed to harness the technology’s benefits while preventing its potential harms. In his latest book, Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy, Kemp explores how regulating AI can protect consumers and ensure a safer, more accountable technology landscape. Here’s an overview drawn from Kemp’s book, in which he outlines his roadmap for containing AI and makes the case for regulation.
Harnessing the Benefits, Limiting the Harms
Comparing the emergence of AI to the opening of Pandora’s box in Greek mythology, Kemp emphasizes the need to keep its potential harms within boundaries. He notes that while AI brings powerful gifts, it can also unleash plagues and evils if left unregulated. To confront AI bias, Dr. Timnit Gebru, founder of the Distributed Artificial Intelligence Research Institute (DAIR), calls for the establishment of principles and standards, governing bodies, and thorough auditing of algorithms. She advocates an FDA-like regulatory framework for AI.
The Need for Regulation
Kemp concurs with Gebru’s perspective and highlights the necessity of regulation for the AI industry. AI, he suggests, is a new game that needs rules and referees to ensure accountability and safety. Kemp proposes that the Federal Trade Commission (FTC), in a role similar to the FDA’s in drug approval, assess the impact of AI systems in high-stakes areas such as housing, employment, and credit. Such assessments would enhance transparency and accountability and address concerns like digital redlining.
The Blueprint for an AI Bill of Rights
Kemp points to the Biden Administration’s Office of Science and Technology Policy (OSTP) proposal for an AI Bill of Rights. This blueprint seeks to give individuals the right to know when automated systems impact their lives and to understand the reasons behind those outcomes. Incorporating the blueprint into AI regulation would ensure transparency and empower consumers to contest AI-based decisions. Additionally, Kemp suggests introducing nutrition-label-style indicators on websites to distinguish AI-generated content from human-generated content.
Certifications and Codes of Conduct
To enhance trust and accountability in the AI industry, Kemp emphasizes the need for AI certifications, much as the finance industry relies on accredited certified public accountants (CPAs) and certified financial audits. He also calls for the development of industry standards and codes of conduct to guide AI usage. Notably, the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) have already taken steps in this direction: ISO is developing a new AI risk management standard, and NIST has released an initial framework for AI risk management.
Diversity and Inclusivity
Kemp emphasizes the importance of diverse and inclusive design teams in building AI systems. By diversifying the pool of individuals working on AI projects, biases in AI systems can be mitigated. This aligns with Olga Russakovsky’s view that greater diversity leads to less biased AI systems.
Addressing Antitrust Issues
Recognizing the significance of AI in the tech industry, Kemp advises regulators to focus on AI-related antitrust issues. He suggests closer scrutiny of acquisitions of AI companies by Big Tech firms. Additionally, he proposes mandating open intellectual property for AI to prevent technological innovation from becoming concentrated solely in the hands of a few dominant companies.
Preparing for AI’s Impact on Workforce
Kemp stresses the importance of preparing our society and economy for AI-driven job displacement through automation. While acknowledging the need for better education and training, he also underscores the need for a balanced approach, as not everyone can become a software developer. The economist Joseph E. Stiglitz warns that, unlike smaller-scale changes from technology and globalization, AI’s profound changes will require careful management to avoid deepening polarization and weakening democracy.
A Safer and Positive AI Future
In conclusion, Kemp emphasizes the responsibility of Big Tech companies to ensure that AI’s effects on society are positive rather than detrimental. He underlines the risks posed by the vast collection and processing of sensitive data and by the potential exploitation of AI. Containing those risks, he argues, requires a combination of regulation, inclusive design, antitrust oversight, and workforce preparation.
As the world forges ahead in the age of AI, it is crucial to strike a balance between harnessing AI’s benefits and safeguarding against its potential perils. Through responsible regulation and a holistic approach, we can ensure that AI becomes a force for good, benefiting society while protecting our civil rights, economy, and democracy.
Disclaimer: This article contains excerpts from the book Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy by Tom Kemp. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policies or positions of any entities mentioned.