Stability AI has launched StableLM, an open-source language model suite positioned as a rival to ChatGPT. The first Alpha release comes in 3-billion and 7-billion-parameter versions, with models of 15 billion to 65 billion parameters to follow. Developers are free to inspect, use, and adapt the StableLM base models for their own needs, as permitted by the CC BY-SA-4.0 license.
StableLM is trained on a new experimental dataset that builds on the open-source dataset "The Pile" but is three times larger, containing 1.5 trillion tokens of content. Stability AI, which released the revolutionary image model Stable Diffusion in 2022, is offering a transparent, open, and scalable alternative to proprietary AI: the StableLM suite generates both text and code and can support a range of applications. Despite its small size of 3 to 7 billion parameters, compared to GPT-3's 175 billion, StableLM exhibits surprisingly high performance on conversational and coding tasks.
Alongside the StableLM base models, Stability AI is also releasing research models that have been instruction fine-tuned, primarily using five recent open-source datasets for conversational agents: Alpaca, GPT4All, Dolly, ShareGPT, and HH. These fine-tuned models are intended for research use only and are distributed under a noncommercial CC BY-NC-SA 4.0 license.
Stability AI's stated aim is to make the most advanced AI technologies accessible to all. The company is committed to promoting open source and to providing high-performance models tailored to the individual needs of its users. With the launch of the StableLM suite of models, it is paving the way for new possibilities in AI applications and research.