OpenAI CEO Believes Making Models Bigger is a Thing of the Past

OpenAI has been a much-discussed organization in the tech world, with its research and products providing notable implementations of artificial intelligence (AI). Recently, CEO Sam Altman led the creation of GPT-4, the company’s long-awaited upgrade to its large language models (LLMs). The model has been so successful that a group of prominent researchers and tech company leaders, including Elon Musk, signed an open letter urging a moratorium on AI experiments more powerful than GPT-4.

Given the success of GPT-4 and OpenAI’s research, one might expect the company to continue pursuing ever-larger models to produce more impressive results. However, Altman surprised many in the tech world by saying that the practice of simply increasing model size to gain improvements has come to an end. During an MIT event, Altman stated that “[we] are at the end of the era where it’s going to be these, like, giant, giant models. We’ll make them better in other ways.”

OpenAI has a long history of working on large language models. Its first landmark model, GPT-2, was released in 2019 with 1.5 billion parameters (the adjustable variables a model tunes as it “learns” from training data). The following year brought an enormous leap in size and power with GPT-3, which held an estimated 175 billion parameters. GPT-4 is rumored to be larger still, at roughly one trillion parameters; while OpenAI has not revealed its true size, the model’s impact on the AI industry makes clear that a substantial increase in capability was achieved.
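To make “parameters” concrete, here is a minimal Python sketch (not OpenAI code, and far simpler than a transformer) that counts the adjustable values in a toy fully connected network. The layer sizes chosen are arbitrary; the point is only that parameter count grows rapidly with network width and depth:

```python
def count_parameters(layer_sizes):
    """Count weights and biases in a fully connected network.

    layer_sizes is a list like [inputs, hidden, ..., outputs].
    Each layer contributes fan_in * fan_out weights plus fan_out biases.
    """
    total = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out + fan_out
    return total

# A tiny 3-layer network: 784 inputs -> 128 hidden units -> 10 outputs
print(count_parameters([784, 128, 10]))  # 784*128 + 128 + 128*10 + 10 = 101770
```

Training adjusts each of those values; models like GPT-3 apply the same idea at the scale of billions of parameters.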


Nonetheless, OpenAI’s own technical report suggests that further increases in model size may no longer yield proportional gains. Altman compared the situation to the “gigahertz race in chips in the 1990s and 2000s, where everybody was trying to point to a big number.” Interestingly, he conceded that parameter counts may still grow, but his primary focus has shifted away from simply scaling up models. Rather, he said OpenAI’s goal is “to deliver the most capable, useful and safe models.”

