Stability AI Launches ChatGPT-Style Language Models


Stability AI, the startup behind the generative AI art tool Stable Diffusion, recently released a suite of text-generating AI models intended to rival programs such as OpenAI’s GPT-4. Dubbed StableLM, the models are currently in “alpha” and available on GitHub and Hugging Face, a platform for hosting AI models and code.

The suite of models can generate both code and text and is designed to be more efficient than larger models. The models were trained on an expanded version of The Pile, a dataset of internet-scraped text samples originally assembled by EleutherAI; Stability AI says its version is three times the size of the original. Even so, the models still tend to produce toxic responses on certain topics and to hallucinate facts.

A large part of the appeal behind these models is their open-source design. As the company notes, “Language models will form the backbone of our digital economy, and we want everyone to have a voice in their design.” This means that anyone can check under the hood to verify the performance and detect potential risks in the models. By open-sourcing the models, all voices in the AI space are given an opportunity to contribute to the discussion.

Additionally, Stability AI has released fine-tuned versions of some of its models. Using Stanford’s Alpaca instruction-tuning approach, these models can respond to prompts such as “write a cover letter for a software developer” and “write lyrics for an epic rap battle song.”

The company has never shied away from controversy. It has been accused of disregarding copyright law by building its AI art tools from web-scraped images, and those tools have been used to generate pornographic deepfakes and graphic depictions of violence.


Despite the controversies, StableLM holds great potential to revolutionize and shape the AI landscape. Open-sourcing the models will give researchers more control over the development of language models and will hopefully lead to increased safety and better performance.
