OpenAI, an AI research company, has been pushing the boundaries of technology with its AI chatbot, ChatGPT. The result has been a surge of interest in the AI industry, with both Microsoft and Google drawing on OpenAI's technology in their own chatbots.
In March, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and a number of academics penned an open letter calling for a pause on AI research. The letter concerned the development of large language models, including OpenAI's GPT-4, and was signed by over 25,000 people.
At an MIT event on Thursday, OpenAI's CEO Sam Altman shared his opinion on the letter. He acknowledged that safety issues need to be addressed in AI development, but felt that the letter from the Future of Life Institute was "lacking most of the technical nuance," as it offered no specifics on where a research pause was actually needed.
Altman added that AI labs and independent researchers should use such a pause to develop guidelines for the use of AI, which could then be reviewed and audited by outside experts. He also said that safety guidelines are becoming increasingly necessary as AI capabilities improve.
In the past, Altman has been open about his concerns surrounding AI development, having written about them in earlier essays. With OpenAI's chatbot now a major success, public interest in the ethical development of AI has only grown. It is therefore important that organizations like OpenAI remain active in the conversation, and stay open to suggestions and regulatory measures that ensure the safety of AI-related technologies.