Amid escalating concerns over manipulated content targeting India's general elections, major tech firms Meta and OpenAI have taken significant action to combat the spread of misinformation. Both companies released reports this week detailing their efforts, shedding light on covert influence campaigns aimed at swaying public opinion.
OpenAI, the US-based artificial intelligence company behind ChatGPT, disclosed that it had intervened in a covert influence campaign orchestrated by an Israeli firm. The campaign used OpenAI's models to create fake social media personas and generate content related to the Indian elections, including anti-Bharatiya Janata Party (BJP) material that was disseminated across several social media platforms.
Meta, meanwhile, the parent company of Facebook, Instagram, and WhatsApp, announced the removal of numerous accounts, pages, and groups for violating its policy against coordinated inauthentic behavior. These accounts, originating in China, targeted the Sikh community in India and in other countries around the world.
OpenAI's report highlighted the disruption of a network operated by STOIC, an Israeli political campaign management firm. OpenAI banned accounts that were creating and editing content for an influence operation spanning multiple social media platforms. The operation initially focused on generating content related to the Gaza conflict before shifting its attention to the Indian elections, criticizing the ruling BJP and praising the opposition Congress party.
In response to OpenAI's findings, the BJP voiced concern about the threat posed by such influence operations, emphasizing the need for transparency and public awareness. Minister of State for Electronics and IT Rajeev Chandrasekhar denounced what he described as foreign interference in Indian politics, carried out through misinformation campaigns on behalf of certain political parties.
Meta's 'Adversarial Threat Report' detailed the removal of accounts linked to a China-based network targeting the global Sikh community. The network engaged in coordinated inauthentic behavior, promoting a fictitious activist movement called Operation K. Posts from these accounts covered a range of topics, including the Khalistan independence movement, the Sikh community worldwide, and criticism of the Indian government.
Additionally, Meta confirmed that it had removed accounts associated with the STOIC network and issued a cease-and-desist letter to the firm. The company emphasized its commitment to combating misinformation and foreign interference on its platforms, underscoring the importance of maintaining the integrity of online discourse.
Together, the reports from OpenAI and Meta underscore the ongoing challenge posed by manipulated content in the digital sphere, particularly around sensitive political issues such as the Indian elections and the Sikh community. The actions taken by both firms signal a proactive approach to safeguarding their platforms against malicious actors seeking to sway public opinion for strategic ends.