Stability AI has launched a suite of large language models called StableLM that seeks to take on OpenAI's GPT-4. The language models are open source, granting developers the ability to freely inspect, use, and adapt them for commercial and research purposes. StableLM is currently available in 3bn- and 7bn-parameter sizes, with models of 15bn to 65bn parameters to follow. This is significantly smaller than OpenAI's GPT-3, which has 175bn parameters.
Stability AI is continuing its push to make Artificial Intelligence (AI) accessible to all. Developers and researchers can generate text and code with the StableLM models and use them for a wide range of downstream applications, thanks to their combination of small size and high performance. This performance results from training the models on a new experimental dataset built on the open-source dataset The Pile, containing 1.5 trillion tokens drawn from sources such as Wikipedia, Stack Exchange, and PubMed.
StableLM is now available on GitHub and on Hugging Face, a platform for hosting AI models and code. This allows people to build their own language models without having to rely on commercial software, in keeping with Stability AI's mission to make AI technology accessible, transparent, and supportive. Such language models are becoming increasingly important with the rise of the digital economy, and Stability AI wants to ensure that everyone has a voice in their development.
Stability AI has recently drawn scrutiny through a lawsuit from Getty Images, which accuses the start-up of brazenly infringing its intellectual property by copying more than 12 million images without permission. Despite this, Stability AI continues to make strides in AI technology with the launch of StableLM and its commitment to accessibility.