Deepfakes, artificially generated images, video, or audio that falsely portray real people, have become increasingly common as artificial intelligence advances. The law, however, has yet to catch up with the technology, leaving victims wondering what recourse they have. While no federal regulations specifically address deepfakes, some existing legal principles can be applied.
According to Dr. Shomir Wilson, an assistant professor at Penn State University, although there is no national law against deepfakes, some states are attempting to regulate them. However, the law often lags behind technological advancements, which can hinder its effectiveness.
Despite the lack of targeted statutory remedies, victims of deepfakes can still take legal action. Eric Ric Cohen, a partner at Cohen and Silver in Philadelphia who specializes in entertainment law, suggests that longstanding legal principles such as invasion of privacy and defamation can apply. Defamation occurs when someone makes a false statement of fact that harms another person's reputation.
However, taking legal action can be challenging, especially if the victim is unable to pay an attorney’s hourly fee. In cases where the deepfake is posted on social media, individuals can report it to the platform, as these platforms take such matters seriously.
For celebrities and politicians, pursuing legal action can be even more difficult. Cohen points to New York Times Co. v. Sullivan, in which the U.S. Supreme Court limited the ability of public officials to bring defamation claims: a public official must prove that a statement was made with "actual malice," that is, with knowledge of its falsity or with reckless disregard for whether it was true.
AI-generated songs also present challenges for artists. While copyright law may not apply to these songs, artists can invoke misappropriation of name and likeness to seek damages or to have a song removed from platforms such as Spotify.
Despite the novelty of deepfake technology, these legal principles have been used in similar instances involving traditional editing tools. Cohen believes that most people agree on the need for AI regulation, and he expects to see targeted legislation in the future.
Dr. Thiago Serra, an assistant professor at Bucknell University, cautions against overly broad regulations, as they could limit access to the technology for certain companies without benefiting others. He believes that treating AI as something to be taken away would do more harm than good.
In conclusion, while the law is still catching up with deepfake technology, victims can pursue claims under existing principles such as invasion of privacy and defamation. Pursuing such cases can be difficult, though, and the cost of legal representation presents an additional hurdle. Reporting deepfakes to social media platforms is a free and often effective alternative. As AI continues to advance, targeted legislation may become necessary, but balancing regulation with broad access to the technology remains a challenge that requires careful consideration.