AI chatbots have taken the world by storm, and the competition is growing more intense. Google Bard and several Chinese firms have already joined the fray, and now Stability AI, the company behind Stable Diffusion, has entered with its own ChatGPT competitor: StableLM. The new model has just 3 billion to 7 billion parameters, compared with the 175 billion of OpenAI's GPT-3. Nevertheless, StableLM's developers claim that with proper training their model can deliver high performance while remaining small and efficient.
StableLM is currently available as an alpha release on Hugging Face, though it is early in development and may have performance issues. The model also does not yet feature reinforcement learning from human feedback (RLHF) the way ChatGPT does; however, Stability AI plans to eventually offer a 175-billion-parameter version to match ChatGPT.
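For developers who want to experiment with the alpha, the checkpoints can be loaded with the Hugging Face transformers library. The following is a minimal sketch, assuming the publicly listed 3-billion-parameter alpha checkpoint ID and typical sampling settings; it is an illustration of the general workflow, not official Stability AI code.

```python
# Minimal sketch: loading a StableLM alpha checkpoint via transformers.
# Model ID and generation settings are assumptions based on the alpha release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "stabilityai/stablelm-base-alpha-3b"  # assumed alpha checkpoint ID

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.to("cuda" if torch.cuda.is_available() else "cpu")

# Encode a prompt and sample a short completion.
prompt = "Explain what a language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is small relative to GPT-3, a sketch like this can run on a single consumer GPU, which is part of StableLM's appeal to independent developers.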
The true strength of StableLM lies in its open-source nature. Developers who cannot yet leverage the power of ChatGPT due to cost or technology constraints can use StableLM as an introduction to AI chatbots. Through StableLM, bedroom coders and indie developers can explore the possibilities of AI technology without the expense of the top-tier models.
Stability AI is led by a team of AI experts, including CTO Taylor Rees, a senior software engineer with extensive experience building solutions with machine learning and natural language processing. These experts bring their expertise to the masses, making AI more accessible to a wide range of developers who might otherwise not have had the opportunity.
The AI chatbot space is an impressive arena, and the introduction of competitors such as Stability AI's StableLM is an exciting addition. It will be interesting to see how this technology helps democratize AI, and how bedroom coders and indie developers build on this powerful open-source foundation.