IBM Urges Legal Liability for Deepfakes Ahead of Global Elections

Big Tech Companies Face Calls for Legal Liability Over Deepfakes

IBM’s top policy official, Christopher Padilla, has called for Big Tech companies like Google and Meta to be held legally responsible for the spread of deepfake content. Padilla emphasized the importance of not only holding the creators of deepfakes accountable but also the platforms that host and distribute such material.

During a recent visit to Seoul, Padilla highlighted the need for legal liability measures against those who post deepfake content and the platforms that fail to promptly remove it. IBM’s stance aligns with the company’s commitment to ethical AI practices and transparency in the use of artificial intelligence technology.

The European Union’s recent adoption of the Artificial Intelligence Act, which IBM publicly supported, lays the groundwork for regulating AI technologies and promoting transparency among companies providing AI services. IBM praised the EU’s risk-based approach, which categorizes AI systems based on their level of risk and potential impact, particularly in critical sectors like education, law enforcement, and elections.

While IBM executives endorsed the EU’s regulatory framework, competitors such as Microsoft, Meta, and Google did not release formal statements supporting the legislation. IBM’s executives emphasized the importance of differentiating between low-risk AI applications, like restaurant recommendations, and high-risk uses in areas such as healthcare and finance that require more oversight.

By offering open-source AI models through its Watsonx platform, IBM aims to provide clients with transparent AI tools that undergo rigorous scrutiny and feedback from experts. The company's commitment to open-sourcing its AI models reflects its dedication to ethical practices and to reducing the risks associated with AI technologies.


As the debate over deepfakes and AI regulation continues, IBM’s advocacy for legal liability measures signals a growing demand for accountability within the tech industry. By supporting the EU’s AI legislation and promoting transparency in AI development, IBM sets a precedent for responsible AI governance that prioritizes user trust and integrity in the digital landscape.

Frequently Asked Questions (FAQs)

What are deepfakes and why are they a concern?

Deepfakes are manipulated videos or images that use artificial intelligence technology to create realistic but false content. They are a concern because they can be used to spread misinformation, harm individuals' reputations, and undermine trust in media and information.

Why is IBM calling for legal liability for deepfakes?

IBM believes that holding Big Tech companies accountable for the spread of deepfake content is essential to combating the technology's negative effects. By pressing for legal liability, IBM aims to promote ethical practices and transparency in the use of AI.

How does the European Union's Artificial Intelligence Act contribute to regulating AI technologies?

The EU's Artificial Intelligence Act provides a regulatory framework for categorizing and overseeing AI systems based on their risk and potential impact. This legislation aims to promote transparency, accountability, and responsible AI governance across various sectors, including critical areas like elections and healthcare.

What is IBM's approach to AI governance and transparency?

IBM is committed to ethical AI practices and transparency in AI development. The company advocates for legal liability measures for deepfakes, supports the EU's AI legislation, and promotes the use of open source models like Watsonx to provide clients with transparent and accountable AI tools.

How does IBM differentiate between low-risk and high-risk AI applications?

IBM distinguishes between low-risk AI applications, such as restaurant recommendations, and high-risk uses in sectors like healthcare and finance that require more oversight. By promoting transparency and accountability in AI development, IBM aims to reduce risks and build user trust in the digital landscape.

