OpenAI, the company behind the ChatGPT chatbot, suffered a security breach in 2023, the New York Times has reported. A hacker infiltrated the internal messaging systems of the Microsoft-backed company and obtained details about its artificial intelligence (AI) technologies.
The stolen details reportedly came from an online forum where OpenAI employees discussed the company's latest AI advancements. The hacker did not, however, get into the core systems where ChatGPT and other AI models are housed and built.
Following the breach, OpenAI executives informed employees and the board but chose not to disclose the incident publicly. They reasoned that no customer or partner information had been compromised and that the hacker appeared to be acting alone, with no known ties to a foreign government.
The breach raises broader concerns about the safety and potential misuse of AI technology. OpenAI recently disrupted covert influence operations that sought to use its AI models for deceptive activity online. At the same time, however, the company disbanded its Superalignment team, which researched long-term AI risks, despite industry-wide acknowledgement of the importance of ethical AI development.
Amid rising international competition in AI, the Biden administration is reportedly weighing safeguards to keep advanced US AI technology out of the hands of countries such as China and Russia. These measures could include restricting access to the most advanced AI models, such as the one that powers ChatGPT.
The industry is also shifting towards more responsible development: 16 major AI companies recently pledged to build the technology safely, a sign that ethical considerations are becoming central to AI advancement.