OpenAI’s AI chatbot ChatGPT has a new ‘archive chats’ feature that lets users store conversations without cluttering the sidebar. OpenAI announced the update on its platform, noting that archived chats can be retrieved through the Settings menu. The feature is currently available on the web and iOS, with Android support coming soon.
Alongside the new feature, OpenAI recently published a 27-page ‘Preparedness Framework’ document. The framework lays out how OpenAI intends to track, evaluate, forecast, and protect against potential ‘catastrophic risks’ posed by powerful frontier models. OpenAI defines a catastrophic risk as one that could cause hundreds of billions of dollars in economic damage or severe harm or death to many people, including but not limited to existential risk.
OpenAI’s preparedness team scores frontier AI models across four broad risk categories: cybersecurity; CBRN (chemical, biological, radiological, and nuclear) threats; persuasion; and model autonomy. Under the framework, only models whose post-mitigation scores are ‘medium’ or below in every category can be deployed, and only those scoring ‘high’ or below can be developed further.
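To make the gating rule concrete, here is a minimal Python sketch of the threshold logic described above. It is an illustration under stated assumptions, not OpenAI’s actual tooling: the enum, function names, and category keys are invented for the example.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    # Ordered so that a higher value means greater risk.
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four tracked categories named in the framework.
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "model_autonomy")

def overall_risk(scores: dict[str, RiskLevel]) -> RiskLevel:
    """The overall rating is driven by the worst category score."""
    return max(scores[c] for c in CATEGORIES)

def can_deploy(post_mitigation: dict[str, RiskLevel]) -> bool:
    """Deployment gate: every post-mitigation score is 'medium' or below."""
    return overall_risk(post_mitigation) <= RiskLevel.MEDIUM

def can_develop_further(post_mitigation: dict[str, RiskLevel]) -> bool:
    """Development gate: every post-mitigation score is 'high' or below."""
    return overall_risk(post_mitigation) <= RiskLevel.HIGH

# Example: a model rated 'high' on cybersecurity may continue development,
# but cannot be deployed until mitigations lower that score.
scores = {
    "cybersecurity": RiskLevel.HIGH,
    "cbrn": RiskLevel.LOW,
    "persuasion": RiskLevel.MEDIUM,
    "model_autonomy": RiskLevel.LOW,
}
print(can_deploy(scores))           # False
print(can_develop_further(scores))  # True
```

As the sketch shows, the two thresholds are independent gates: a single high-risk category blocks deployment while still permitting further development.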
To strengthen these safeguards, OpenAI is establishing a cross-functional safety advisory group responsible for reviewing and analyzing risk reports. The group’s findings will be presented to both the leadership team and the board of directors. OpenAI does not explicitly name the decision-makers, but the leadership team presumably includes CEO Sam Altman, President Greg Brockman, and CTO Mira Murati.
Notably, the framework also empowers the board of directors to overrule the leadership when necessary, a check OpenAI frames as part of its commitment to responsible and ethical AI development.
With the introduction of the archive chats feature and the implementation of the Preparedness Framework, OpenAI continues to prioritize user experience, safety, and ethical practices. As the company takes steps towards mitigating risks associated with advanced language models, it aims to strike a balance between innovation and safeguarding against potential harm.