OpenAI has been forced to change the rules governing its ChatGPT language model after researchers at Google DeepMind discovered a way to extract its training data. According to a report by 404 Media, ChatGPT will no longer comply with prompts asking it to repeat a specific word indefinitely. The DeepMind researchers found that instructing the model to repeat a word such as "company" endlessly could cause it to divulge portions of its training data. They published their findings on the preprint server arXiv, showing that OpenAI's model had ingested information from a wide range of sources on the open internet. In response, OpenAI has modified ChatGPT to prevent users from replicating the attack, displaying a warning message if such attempts are made. The incident has raised broader concerns about the safety of machine learning systems and highlights the need for further research into training-data extraction. OpenAI has yet to comment on the matter.
OpenAI’s ChatGPT Vulnerability Exposed: Training Data Leak Reveals Alarming Open Internet Ingestion