Stability AI, a company dedicated to developing open-source generative AI models, recently unveiled its StableLM suite of language models. The launch marks the company's entry into the language model arena, currently dominated by tech giants such as OpenAI and Meta, alongside academic efforts like Stanford's Alpaca. StableLM extends the company's foundational AI technology and its commitment to transparency, accessibility, and support in AI design.
The StableLM suite is the company's first language model offering, with 3-billion- and 7-billion-parameter models available now and larger 15-billion- to 65-billion-parameter models to follow. The models are trained on a new dataset containing 1.5 trillion tokens, with a context length of 4,096 tokens. As part of the suite, the company has also released fine-tuned models built with Stanford Alpaca's procedure on a combination of five open-source datasets for conversational agents.
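To put those parameter counts in perspective, a rough back-of-the-envelope calculation shows the memory needed just to hold each model's weights in half precision (fp16, 2 bytes per parameter). This is an illustrative sketch, not an official system requirement from Stability AI; real deployments also need memory for activations and the attention cache.

```python
def weight_memory_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory to hold the weights alone, in GB (fp16 = 2 bytes per parameter)."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# The four model sizes mentioned in the announcement.
for size in (3, 7, 15, 65):
    print(f"{size}B parameters -> ~{weight_memory_gb(size):.0f} GB in fp16")
```

By this estimate, the 7-billion-parameter model needs roughly 14 GB for weights in fp16, while a 65-billion-parameter model would need around 130 GB, which is why smaller open models are attractive for local experimentation.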
In addition, users may freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the Creative Commons BY-SA-4.0 license, which requires users to credit Stability AI, link to the license, and indicate whether changes were made.
Stability AI is known for its commitment to transparency and accessibility in AI design, and the StableLM suite is no exception. The company has open-sourced prior work, including the groundbreaking Stable Diffusion image model as well as the GPT-J and GPT-NeoX language models, which were trained on the public The Pile dataset.
To further encourage development in the field of AI, Stability AI has also kicked off a crowd-sourced RLHF program and is collaborating with initiatives such as Open Assistant. The company is currently seeking to grow its team with individuals who are passionate about democratizing access to this technology and experienced with LLMs.
About Stability AI
Founded in 2021, Stability AI is a company leading the development of open-source generative AI models, with products designed to be efficient and transparent. Its models, such as Stable Diffusion and Dance Diffusion, offer an open-source alternative to proprietary AI, providing anyone with access to foundational AI technology.
About Jonathan de Haven
Jonathan de Haven is the founder and CEO of Stability AI, as well as the lead researcher at Ontocord.ai. He previously worked in computational biology and machine learning, developing models across multiple domains, including image, audio, video, 3D, and biology. He founded Ontocord.ai in 2019 and established Stability AI in 2021 to further the development of open-source AI models.