New Hampshire primary voters were targeted by AI-generated robocalls impersonating President Biden that urged them not to vote in the primary and to instead "save" their vote for the November election. The calls prompted concerns about election disruption and raised questions about how AI-altered media could be used to undermine future elections. The New Hampshire Attorney General’s office is investigating the robocalls as an apparent unlawful attempt to suppress voters and disrupt the primary.
Lawmakers and advocates have called for federal action to address the potential harms of AI-generated content, particularly deepfakes. Several bills have been introduced to tackle these concerns, but it remains uncertain whether the Federal Election Commission will explicitly apply its existing rules on fraudulent misrepresentation to AI-generated content.
The fake Biden robocall is just one example of AI-generated clips sparking fear and debate on the 2024 campaign trail. Last year, the Republican National Committee released an ad featuring an AI-generated version of Biden’s voice, while a super PAC backed by Silicon Valley insiders used an AI-driven chatbot to promote a Democratic presidential candidate. Some states have taken steps to regulate AI-generated campaign content, but advocates argue that federal rules are needed.
In other news, Meta (formerly Facebook) announced that it will allow European Union users to unbundle their accounts across various services, complying with the EU’s new competition rules. Users will have the option to unlink their Facebook and Instagram accounts, among other changes, ahead of the Digital Markets Act rules that require major tech companies to make their products interoperable with other services.
A case involving a faulty facial recognition match has highlighted concerns about law enforcement’s use of the technology. The victim, who says he was incarcerated elsewhere at the time of the alleged crime, was falsely accused of armed robbery and claims he was sexually assaulted and beaten in jail after his arrest. The technology’s accuracy problems underscore the dangers of its misuse.
Apple has paid a $12.3 million fine in Russia after its App Store practices were found to violate antitrust law. The Russian Federal Antimonopoly Service ruled that Apple had prohibited app developers from telling users about alternative payment options outside its App Store. The fine will be paid into the Russian government’s budget.
Finally, the Securities and Exchange Commission (SEC) has determined that a SIM swap attack was responsible for the recent hack of its social media account. Meanwhile, growing online misinformation groups known as "truthers" are claiming that a Hamas massacre was a false flag operation, deepening concerns about AI’s destabilizing impact on the concept of truth ahead of the 2024 election.