The large language model sector continues to expand: StabilityAI, the creator of Stable Diffusion, a popular image-generation application, has launched a suite of open-source language models known as StableLM. StableLM is currently available in alpha versions with 3 billion and 7 billion parameters, with 15, 30 and 65 billion parameter models planned for the near future. Beyond that, the organisation intends to develop a 175 billion parameter model.
For comparison, GPT-4's parameter count is estimated at about one trillion, roughly six times the 175 billion parameters of GPT-3. Despite the far smaller parameter counts, StabilityAI says StableLM performs surprisingly well on conversational and coding tasks thanks to its training data: a new experimental dataset built on The Pile that is roughly three times larger.
Developers, traders and other users can try the 7 billion parameter StableLM model live on Hugging Face. So far, the site has faced availability issues due to the overwhelming number of users.
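For developers who would rather run the model locally than wait for the demo, here is a minimal sketch using the Hugging Face transformers library. The model identifier follows the naming of the alpha release; the half-precision and device settings are assumptions about a single-GPU setup, so verify them against your own hardware.

```python
# Minimal sketch: loading the 7B StableLM alpha model with Hugging Face
# transformers. The model ID matches the alpha release naming; confirm it
# on huggingface.co before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-tuned-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",          # place layers automatically (requires accelerate)
)

prompt = "Explain what a language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```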
StabilityAI is a start-up best known for its image-generation application, Stable Diffusion, which creates images from text prompts using artificial intelligence. The company was founded in 2019 and operates out of San Francisco and London.
The company is headed by its founder and CEO, Emad Mostaque, who holds a master's degree in mathematics and computer science from the University of Oxford. Before founding StabilityAI he worked as a hedge fund manager, and in his current role he sets the company's product direction while remaining actively involved in research, looking for ways to improve its models and offerings.