NewsGuard, an online news-rating group, published a report on Monday about AI chatbot-generated websites that have flooded the web. These chatbots, such as OpenAI’s ChatGPT and Alphabet Inc.’s Google Bard, can produce detailed text from simple user prompts. The report warns that this could amplify fraud techniques, since these websites, which often carry generic names like News Live 79 and Daily Business Post, do not disclose that their content was generated by AI.
A separate review conducted by Bloomberg found that some of these AI-generated websites have engaged in pernicious activities such as publishing fake obituaries and fabricated news, and even spreading a conspiracy theory about mass deaths caused by vaccines. NewsGuard also highlights that some of these sites monetize their content by advertising “guest posting,” in which people pay for mentions of their business on the site to boost its search ranking. As the underlying technology advances, producing such content has become ever cheaper and more accessible, which is alarming because the cost to perpetrators is now close to nothing.
The use of AI in content generation poses a significant challenge for companies such as Google, whose technology generates revenue for half of these sites and whose AI chatbot Bard may have been used by some of them. When asked by Bloomberg whether these AI-generated websites breach its advertising policies, Google spokesperson Michael Aciman responded that the company does not consider such sites to be inherently in violation, but will take action when required. Following Bloomberg’s inquiry, the tech giant took down ads from individual pages on some sites and removed ads entirely from websites where violations were found.
According to Gordon Crovitz, co-CEO of NewsGuard and former publisher of The Wall Street Journal, using AI chatbot models to produce websites that resemble news outlets is a form of fraud disguised as journalism, and companies such as OpenAI and Google should take the necessary measures to prevent misuse of their models. OpenAI responded that it uses both automated and human reviewers to detect and prevent misuse of its models, and that it issues warnings or, in more serious cases, penalizes users.
Noah Giansiracusa, an associate professor of data science and mathematics at Bentley University, believes that advances in AI and automation have made it easier and faster to intentionally create low-quality content that could breach Google’s policies and harm its advertising ecosystem. He argues that companies such as Google must take steps to regulate the use of AI-generated content and prevent it from being used to manipulate search rankings and flood the web with low-quality material.