The past four months have seen a massive surge in the popularity of AI chatbots, some of which can carry on conversations and write sophisticated term papers. Although these bots can appear to be thinking, they are really mimicking speech: the programs behind them are trained on enormous amounts of text pulled from web sources. To understand the data used to make AI chatbots talk, The Washington Post set out to analyze one of these training datasets, known as C4.
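For readers who want to look at C4 firsthand, a public mirror is available through the Hugging Face datasets library. Below is a minimal sketch, assuming the "allenai/c4" dataset identifier; streaming avoids downloading the full corpus, which runs to hundreds of gigabytes.

```python
# Minimal sketch: sample a few documents from the public C4 corpus.
# Assumes the Hugging Face "datasets" library and its "allenai/c4" mirror.
from datasets import load_dataset

c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Each record carries "text", "timestamp", and "url" fields.
for i, record in enumerate(c4):
    if i >= 3:
        break
    print(record["url"])
    print(record["text"][:200], "...")
```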
Big tech companies are incredibly secretive when it comes to how they train their AI chatbots. To shield users from offensive and inappropriate content, the builders of C4 applied a filter known as the List of Dirty, Naughty, Obscene, and Otherwise Bad Words, which contains 402 English terms and one emoji. While the filter is meant to strip out racial slurs and obscenities, it sometimes removes benign LGBT content as well. At the same time, it failed to catch genuinely distressing material: anti-trans and anti-government websites, and even sites promoting the QAnon and “pizzagate” conspiracy theories, were present in the C4 dataset.
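In outline, this kind of blocklist filtering is blunt: a page is dropped if any listed term appears anywhere in it, which explains both the over-blocking of innocuous content and the gaps for hateful pages that avoid the listed words. Here is a minimal sketch of the approach, assuming the word list has been saved one term per line in a file (the filename is hypothetical):

```python
import re

# Hypothetical local copy of the blocklist, one term per line.
with open("bad_words.txt", encoding="utf-8") as f:
    bad_words = {line.strip().lower() for line in f if line.strip()}

def is_clean(text: str) -> bool:
    """Return False if any blocklisted term appears as a whole word."""
    tokens = re.findall(r"[\w']+", text.lower())
    return bad_words.isdisjoint(tokens)

# A single match anywhere discards the entire page, including any
# unobjectionable text surrounding it.
docs = ["An ordinary sentence.", "A page containing one blocked term."]
kept = [d for d in docs if is_clean(d)]
```

Note that nothing in this scheme looks at meaning: a medical page or an LGBT community forum that happens to use a listed word is discarded, while a hateful page written in euphemisms passes through.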
Generally, the data used to train AI chatbots is a snapshot of the web from a particular period of time. The scrape behind C4 was performed in April 2019 by the nonprofit Common Crawl, and companies filter that raw data before training in an effort to protect users from unwanted content. As the analysis showed, however, a lot can still get through: the dataset contained hundreds of pornographic websites and more than 72,000 instances of the word “swastika.”
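Counts like these can be reproduced in outline by streaming the corpus and tallying matches. A rough sketch follows; the search term comes from the article, while the 100,000-document cap is purely illustrative (the full English corpus is far larger):

```python
import re
from datasets import load_dataset

# Rough sketch: tally occurrences of one term across a streamed C4 sample.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
pattern = re.compile(r"swastika", re.IGNORECASE)

hits = 0
for i, record in enumerate(c4):
    if i >= 100_000:  # illustrative cap; the article's count covers the full set
        break
    hits += len(pattern.findall(record["text"]))

print(f"occurrences in sample: {hits}")
```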
AI chatbot models such as GPT-3 consume an overwhelming amount of data. For example, GPT-3’s training data includes all of English-language Wikipedia, novels written by unpublished authors, and text from Reddit links that users rated highly.
Though companies don’t typically disclose what data their AI chatbots consume, the potential for that data to include private, copyrighted, and offensive content underscores the need for transparency in this field. Recent regulatory changes and ongoing efforts to hold tech companies accountable for their AI bots are a step toward ensuring that users’ personal data is handled responsibly.