OpenAI came under scrutiny this week after two significant security issues raised concerns about the safety of its platforms.
The first flaw was found in the Mac version of the ChatGPT application, which stored user conversations on disk in plain text, leaving them readable by any other application or malicious process on the machine. Engineer Pedro José Pereira Vieito demonstrated the problem on Twitter, highlighting the risk of unauthorized access to user conversations. OpenAI quickly shipped an update that encrypts the stored conversations to protect user privacy and security.
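The underlying fix follows a standard pattern: encrypt conversation data before it touches the disk, so that other local processes see only ciphertext. The sketch below is a minimal illustration of that pattern, not OpenAI's actual implementation; the file names, the key-handling helper, and the use of the third-party `cryptography` package are all assumptions for demonstration.

```python
# Illustrative only: encrypting chat logs at rest. This is NOT OpenAI's code.
# Requires the third-party `cryptography` package (pip install cryptography).
from pathlib import Path
from cryptography.fernet import Fernet

STORE = Path("chat_history.bin")   # hypothetical conversation store
KEY_FILE = Path("chat_key.key")    # in a real app the key belongs in the OS
                                   # keychain, not on disk next to the data

def load_key() -> bytes:
    """Create or load a symmetric key (a stand-in for a keychain lookup)."""
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def save_conversation(text: str) -> None:
    """Encrypt before writing, so the file on disk is unreadable ciphertext."""
    STORE.write_bytes(Fernet(load_key()).encrypt(text.encode("utf-8")))

def load_conversation() -> str:
    """Read the ciphertext back and decrypt it with the same key."""
    return Fernet(load_key()).decrypt(STORE.read_bytes()).decode("utf-8")

if __name__ == "__main__":
    save_conversation("user: hello\nassistant: hi there")
    print(load_conversation())
```

On macOS the key would normally live in the system Keychain, so that sandboxing prevents other apps from recovering it; storing the key beside the data, as this sketch does for brevity, would defeat the purpose.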
In a separate incident, a hacker gained access to OpenAI's internal messaging systems, potentially exposing sensitive details about the company's technologies. Although OpenAI had known about the breach since April 2023, it neither disclosed the incident publicly nor involved law enforcement agencies in the investigation. The intruder was not believed to have ties to any foreign government, but the breach raised concerns about the company's ability to protect its intellectual property from malicious actors.
Following the breach, Leopold Aschenbrenner, a former technical program manager at OpenAI, warned the board of directors that the company was vulnerable to foreign adversaries seeking to exploit its technologies. His dismissal shortly after he raised these concerns sparked speculation about how the company handles internal security warnings and information leaks.
These security incidents come at a time when OpenAI is expanding its presence in the AI industry, backed by major investors such as Microsoft and by partnerships with media companies to improve its large language models. While the company has moved to patch the flaws and bolster its defenses, the recent breaches underscore the importance of robust cyber defenses in safeguarding sensitive information in the digital age.