The era of ever-larger artificial intelligence models may be coming to an end, according to OpenAI CEO Sam Altman. Speaking at an event hosted by MIT, Altman suggested that progress in AI will no longer hinge on sheer model scale; further gains will come from other means, such as improved model architectures, greater data efficiency, and better algorithmic techniques.
OpenAI is a research lab dedicated to developing artificial general intelligence. Its CEO, Sam Altman, previously served as president of the startup accelerator Y Combinator and has worked as a developer, entrepreneur, and philanthropist. He is well known for championing OpenAI's mission to ensure that artificial general intelligence benefits all of humanity.
A likely factor in the shift away from the traditional "scaling is all you need" approach is the immense expense of buying and running the powerful graphics processing units (GPUs) that large language models require. ChatGPT, for example, reportedly required more than 10,000 GPUs to train, and demands still more resources to keep running.
The GPU market is largely dominated by Nvidia, which holds around 88% market share. Its recent H100 chips, designed for AI and high-performance computing (HPC) workloads, can cost as much as $30,603 per unit, and often more on secondary online marketplaces. Training a state-of-the-art LLM can require hundreds of millions of dollars' worth of compute alone, as observed by Ronen Dar, cofounder and chief technology officer of Run:ai.
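As a rough illustration of how these figures compound, the sketch below multiplies the numbers reported above (roughly 10,000 GPUs at about $30,603 each). The cloud-rental rate and training duration in the second estimate are hypothetical placeholders, not figures from the article.

```python
# Back-of-envelope estimate of LLM training compute cost,
# using the GPU count and unit price reported in the article.

GPU_COUNT = 10_000   # GPUs reportedly used to train ChatGPT
UNIT_PRICE = 30_603  # reported per-unit price of an Nvidia H100, in USD

# Option 1: buy the hardware outright.
purchase_cost = GPU_COUNT * UNIT_PRICE
print(f"Hardware purchase: ${purchase_cost:,}")  # -> $306,030,000

# Option 2: rent equivalent capacity from a cloud provider.
# Both numbers below are illustrative assumptions only.
HOURLY_RATE = 2.50   # assumed USD per GPU-hour
TRAINING_DAYS = 90   # assumed wall-clock training time

rental_cost = GPU_COUNT * HOURLY_RATE * 24 * TRAINING_DAYS
print(f"Cloud rental for one run: ${rental_cost:,.0f}")  # -> $54,000,000
```

Under either assumption, a single frontier training effort lands in the tens to hundreds of millions of dollars before staffing, data, or inference costs are counted, which is why the economics of scaling draw so much scrutiny.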
In the face of rising costs and diminishing returns, the economics of sheer scale can no longer justify the old approach. Instead, Altman suggests, research should focus on better model architectures, greater data efficiency, and more advanced algorithmic techniques. A new era of AI is slowly backing away from giant models and toward more sustainable means of development.