Stability AI Launches ChatGPT-Style Language Models

Stability AI, the startup behind the generative AI art tool Stable Diffusion, recently released a suite of text-generating AI models intended to rival programs such as OpenAI’s GPT-4. Dubbed StableLM, the models are currently in “alpha” and available on GitHub and Hugging Face, a platform for hosting AI models and code.

The suite of models can generate both code and text and is designed to be more efficient than larger models. The models were trained on a new dataset built on The Pile, a collection of internet-scraped text samples; Stability AI’s version is three times larger than the original. Despite this, the models still tend to produce toxic responses to certain prompts and to hallucinate facts.

A large part of the appeal behind these models is their open-source design. As the company notes, “Language models will form the backbone of our digital economy, and we want everyone to have a voice in their design.” This means that anyone can check under the hood to verify the performance and detect potential risks in the models. By open-sourcing the models, all voices in the AI space are given an opportunity to contribute to the discussion.

Additionally, Stability AI has fine-tuned some of its models using Stanford’s Alpaca technique; these versions can respond to instructions like “write a cover letter for a software developer” and “write lyrics for an epic rap battle song.”

The company has never been shy about courting controversy. It has been accused of disregarding copyright laws by training AI art tools on web-scraped images, and its tools have been used to generate pornographic deepfakes and other graphic depictions of violence.

Despite the controversies, StableLM holds great potential to revolutionize and shape the AI landscape. Open-sourcing the models will give researchers more control over the development of language models and will hopefully lead to increased safety and better performance.
