OpenAI’s advances in natural language processing have caused quite a stir since they were introduced. Its AI-powered chatbot, ChatGPT, became a viral sensation thanks to its ability to generate natural-sounding, coherent, and largely harmless responses in conversation. But a team of researchers at the Allen Institute for AI, co-founded by the late Paul Allen, has discovered a way to make ChatGPT consistently toxic.
In their study, the researchers used the ChatGPT model together with the “system parameter” of the ChatGPT Application Programming Interface (API). By assigning the chatbot a persona through this parameter, the team was able to increase its toxicity up to sixfold. The personas included historical figures, gendered people, and members of political parties, all of which produced significantly worse outputs from the chatbot. For example, when given the system parameter “Steve Jobs” and asked about the European Union (EU), ChatGPT responded that the “EU is nothing more than a bureaucratic nightmare.”
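To make the mechanism concrete, here is a minimal sketch of how a persona can be assigned through the system message of OpenAI's chat completions API. The helper name `build_persona_request` and the persona wording are illustrative, not taken from the study; the actual API call (shown commented out) would require the `openai` package and an API key.

```python
# Sketch: assigning a persona via the "system" role of the chat API.
# The function name and persona phrasing here are hypothetical examples.

def build_persona_request(persona: str, user_prompt: str) -> list[dict]:
    """Return a messages list that assigns `persona` via the system role."""
    return [
        {"role": "system", "content": f"Speak like {persona}."},
        {"role": "user", "content": user_prompt},
    ]

messages = build_persona_request("Steve Jobs", "What do you think of the EU?")

# These messages would then be sent to the chat completions endpoint, e.g.:
# client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
```

The key point is that the system message sits outside the visible conversation, which is why a persona set there can steer the model's tone without the end user ever seeing it.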
Not surprisingly, assigning ChatGPT the persona of a dictator increased its toxicity the most, though male-identifying personas made it slightly more harmful than female-identifying ones. Republican personas also produced a subtle but noticeable increase in the toxicity of the generated text.
The research highlights the fragility of today’s AI technologies even with the mitigations put in place by OpenAI to prevent toxic text outputs. Companies such as Snap, Quizlet, Instacart, and Shopify that use ChatGPT should consider the results of this study and be aware of the potential harms of their AI-powered tools.
One possible way to guard against these outcomes is to craft more specific, less polarizing personas with greater care and precision. Additionally, curating the training data more judiciously and performing “stress tests” that surface the AI’s shortcomings to users could help prevent the model from generating such text.
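A “stress test” of the kind described above could be sketched as a harness that sweeps a set of personas, generates responses, and flags any persona whose output crosses a toxicity threshold. Everything here is a stand-in: `generate` and `toxicity_score` are stubs (a real harness would call the chat model and an external toxicity classifier), and the persona list and threshold are made up for illustration.

```python
# Hypothetical stress-test harness. Both generate() and toxicity_score()
# are stubs standing in for a real model API and a real toxicity classifier.

PERSONAS = ["a helpful assistant", "a dictator", "a political pundit"]
THRESHOLD = 0.5

def generate(persona: str, prompt: str) -> str:
    # Stub: a real implementation would query the chat model with the
    # persona placed in the system message.
    return f"[{persona}] response to: {prompt}"

def toxicity_score(text: str) -> float:
    # Stub: a real implementation would score the text with a classifier.
    return 0.9 if "dictator" in text else 0.1

def stress_test(prompt: str) -> list[str]:
    """Return the personas whose generated output exceeds the threshold."""
    flagged = []
    for persona in PERSONAS:
        if toxicity_score(generate(persona, prompt)) > THRESHOLD:
            flagged.append(persona)
    return flagged

print(stress_test("What do you think of the EU?"))  # → ['a dictator']
```

The design choice worth noting is that the sweep runs before deployment: flagged personas can then be blocked at the system-parameter level rather than relying on output filtering alone.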
OpenAI continues to be a trailblazer in AI and natural language processing. Its advances in chatbot technology have propelled the field to new heights and continue to excite both developers and users. As the technology progresses, OpenAI must stay resilient and alert in order to keep developing models that are technically sound and socially beneficial. Companies building on OpenAI’s technology must likewise remain vigilant to ensure the safety of their users.