IBM Urges Legal Liability for Deepfakes Ahead of Global Elections

IBM’s top policy official, Christopher Padilla, has called for Big Tech companies like Google and Meta to be held legally responsible for the spread of deepfake content. Padilla emphasized the importance of not only holding the creators of deepfakes accountable but also the platforms that host and distribute such material.

During a recent visit to Seoul, Padilla highlighted the need for legal liability measures against those who post deepfake content and the platforms that fail to promptly remove it. IBM’s stance aligns with the company’s commitment to ethical AI practices and transparency in the use of artificial intelligence technology.

The European Union’s recent adoption of the Artificial Intelligence Act, which IBM publicly supported, lays the groundwork for regulating AI technologies and promoting transparency among companies providing AI services. IBM praised the EU’s risk-based approach, which categorizes AI systems based on their level of risk and potential impact, particularly in critical sectors like education, law enforcement, and elections.

While IBM executives endorsed the EU’s regulatory framework, competitors such as Microsoft, Meta, and Google did not release formal statements supporting the legislation. IBM’s executives emphasized the importance of differentiating between low-risk AI applications, like restaurant recommendations, and high-risk uses in areas such as healthcare and finance that require more oversight.

By open sourcing models offered through its Watsonx platform, IBM aims to provide clients with transparent AI tools that undergo rigorous scrutiny and feedback from experts. The company presents this openness as a way to reduce the risks associated with AI technologies and to back its stated commitment to ethical practices.

As the debate over deepfakes and AI regulation continues, IBM’s advocacy for legal liability measures signals a growing demand for accountability within the tech industry. By supporting the EU’s AI legislation and promoting transparency in AI development, IBM sets a precedent for responsible AI governance that prioritizes user trust and integrity in the digital landscape.

Frequently Asked Questions (FAQs)

What are deepfakes and why are they a concern?

Deepfakes are manipulated videos or images that use artificial intelligence technology to create realistic but false content. They are a concern because they can be used to spread misinformation, harm individuals' reputations, and undermine trust in media and information.

Why is IBM calling for legal liability for deepfakes?

IBM believes that holding Big Tech companies accountable for the spread of deepfake content is essential to combating the harms of this technology. By pressing for legal liability, IBM aims to promote ethical practices and transparency in the use of AI.

How does the European Union's Artificial Intelligence Act contribute to regulating AI technologies?

The EU's Artificial Intelligence Act provides a regulatory framework for categorizing and overseeing AI systems based on their risk and potential impact. This legislation aims to promote transparency, accountability, and responsible AI governance across various sectors, including critical areas like elections and healthcare.

What is IBM's approach to AI governance and transparency?

IBM is committed to ethical AI practices and transparency in AI development. The company advocates for legal liability measures for deepfakes, supports the EU's AI legislation, and promotes the use of open source models like Watsonx to provide clients with transparent and accountable AI tools.

How does IBM differentiate between low-risk and high-risk AI applications?

IBM distinguishes between low-risk AI applications, such as restaurant recommendations, and high-risk uses in sectors like healthcare and finance that require more oversight. By promoting transparency and accountability in AI development, IBM aims to reduce risks and build user trust in the digital landscape.
