Expert Urges Policymakers to Focus on Real-World Harms of AI, Not Just Apocalypse
Janet Haven, executive director of the nonprofit Data & Society, has advised policymakers to prioritize the real-world harms of artificial intelligence (AI) rather than getting caught up in fears of an apocalyptic scenario. While she acknowledges that AI may pose long-term existential threats, Haven believes the immediate and visible harms accompanying AI adoption should not be overlooked.
Haven, who is also a member of the White House’s National Artificial Intelligence Advisory Committee, emphasizes the need for policymakers to address the social implications of AI technologies. She asserts that protecting fundamental rights and ensuring the durability of AI governance frameworks should be top priorities. This is especially urgent as complex machine-learning systems are increasingly used in consequential decisions, such as allocating social security benefits and housing, where they pose real risks to individuals and communities.
The debate over AI’s impact on society and the economy has divided policymakers. Some technology advocates stress mitigating AI’s potential risks in the distant future, while human rights activists argue for immediate attention to the harmful biases in AI algorithms that negatively affect people today.
Haven suggests that policymakers refer to the White House’s Blueprint for an AI Bill of Rights, which aims to provide regulatory guidance for federal agencies overseeing AI development. The proposed measures include data privacy protections and efforts to eliminate algorithmic bias. Haven also urges policymakers to broaden their understanding of AI beyond generative AI (associated with technologies like ChatGPT) and not fall victim to extensive corporate lobbying. Instead, they should focus on the diverse domains of AI and the real-world impacts it has on society.
However, Haven observes that policymakers have largely concentrated on worst-case scenarios and the potential dangers of AI, influenced by concerns over national security and competition with China. She believes this narrow focus has overshadowed the immediate, tangible harms caused by AI technologies and has limited the effectiveness of governance frameworks.
In a related matter, Representative Sean Casten (D-Ill.) has taken a long-term approach to stablecoin regulation. He voted against a stablecoin bill proposed by Republicans in the House Financial Services Committee, aiming instead to introduce bipartisan provisions that would make stablecoin regulations inclusive and consistent. Casten has sought clarification from PayPal regarding its stablecoin offerings and has expressed concern about the lack of consensus among members of Congress.
In conclusion, while public attention has been drawn to AI’s potential risks in the distant future, experts like Janet Haven stress the need to address the immediate social and ethical implications of AI technologies. Policymakers should focus on protecting fundamental rights, addressing algorithmic bias, and ensuring the durability of AI governance frameworks. By expanding their understanding beyond narrow conceptions of AI and resisting the influence of corporate lobbying, policymakers can more effectively address AI’s real-world harms and promote responsible AI development.