OpenAI, a leading artificial intelligence lab, recently unveiled its latest chatbot, ChatGPT. The bot’s impressive capabilities have stirred up excitement and investment across the world of AI. However, OpenAI’s CEO, Sam Altman, made a surprising statement late last week, cautioning against the belief that further advances in AI will come from making models bigger. We are now, he said, at the “end of the era” of giant models.
Altman’s statement caught many off guard, as it contrasts with the industry’s current trend of investing heavily in ever bigger and more powerful machine-learning models, such as GPT-4, OpenAI’s latest project. OpenAI’s research strategy has been to take existing machine-learning algorithms and drastically increase their size, a process that has cost more than $100 million.
At the same time, rivals such as Microsoft and Google have invested heavily in AI to keep up with OpenAI’s technology, introducing chatbots of their own in Bing Chat and Bard. With the rise of chatbots, people have begun to explore their numerous potential applications.
Nick Frosst, a former Google AI employee and current cofounder of Cohere, agrees with Altman’s assessment that scaling up is no longer the way to go. He believes transformers, the machine-learning architecture at the heart of models like GPT-4, can be significantly improved by other routes: new AI model designs, incorporating human feedback into algorithms, and further tuning.
Altman’s remarks are likely to prompt numerous changes in the AI industry, as companies look for ways to improve their algorithms without relying solely on larger model sizes. OpenAI’s track record in AI has been impressive, and it is unclear who will lead the next stage of the competitive AI race.