Big Tech Companies Face Calls for Legal Liability Over Deepfakes
IBM’s top policy official, Christopher Padilla, has called for Big Tech companies such as Google and Meta to be held legally responsible for the spread of deepfake content. Padilla emphasized the importance of holding not only the creators of deepfakes accountable but also the platforms that host and distribute such material.
During a recent visit to Seoul, Padilla highlighted the need for legal liability measures against those who post deepfake content and against the platforms that fail to remove it promptly. IBM’s stance aligns with the company’s commitment to ethical AI practices and transparency in the use of artificial intelligence technology.
The European Union’s recent adoption of the Artificial Intelligence Act, which IBM publicly supported, lays the groundwork for regulating AI technologies and promoting transparency among companies providing AI services. IBM praised the EU’s risk-based approach, which categorizes AI systems based on their level of risk and potential impact, particularly in critical sectors like education, law enforcement, and elections.
While IBM executives endorsed the EU’s regulatory framework, competitors such as Microsoft, Meta, and Google did not release formal statements supporting the legislation. IBM’s executives emphasized the importance of differentiating between low-risk AI applications, like restaurant recommendations, and high-risk uses in areas such as healthcare and finance that require more oversight.
By offering open source models through its watsonx platform, IBM aims to provide clients with transparent AI tools that undergo rigorous scrutiny and feedback from outside experts. The company frames this commitment to open sourcing its models as part of its dedication to ethical practices and to reducing the risks associated with AI technologies.
As the debate over deepfakes and AI regulation continues, IBM’s advocacy for legal liability measures signals a growing demand for accountability within the tech industry. By supporting the EU’s AI legislation and promoting transparency in AI development, IBM sets a precedent for responsible AI governance that prioritizes user trust and integrity in the digital landscape.