Former Stability AI Executive Launches Fairly Trained: A Copyright-Compliant AI Certification Initiative
A former Stability AI executive has launched Fairly Trained, a new initiative aiming to address growing concerns about copyright infringement in generative AI. The certification program seeks to protect creators and their work, though popular tools such as ChatGPT are unlikely to qualify, given their reliance on copyrighted training content.
Generative AI companies such as OpenAI and Stability AI often train their models on copyrighted content scraped from the internet. These models then produce a wide range of creative outputs based on that data, sometimes generating work that is clearly derivative of the originals. The practice has sparked outrage among creators and copyright holders, who argue that their work is being used without consent or compensation.
In an attempt to promote a fairer environment for creators, Fairly Trained certifies companies that obtain licenses for their training data. This non-profit organization aims to inform consumers about which AI companies prioritize creator consent and which do not. By offering this certification, Fairly Trained hopes to create a more transparent and ethical landscape for the use of generative AI tools.
The idea for Fairly Trained emerged after its founder and CEO, Ed Newton-Rex, resigned from Stability AI over concerns about the company's use of copyrighted content. Newton-Rex, a musician and computational creativity pioneer, believes that many individuals and companies would prefer to use generative AI models trained on licensed data. However, there is currently no reliable way to distinguish licensed from unlicensed models, which prompted the creation of Fairly Trained.
Nine generative AI organizations received the inaugural certification at launch: Beatoven.ai, Boomy, Bria AI, Endel, LifeScore, Rightsify, Somms.AI, Soundful, and Tuney, all of which have demonstrated a commitment to licensing their training data. While most of these organizations focus on music generation, other media formats are also being considered for certification in the future.
However, one major challenge remains with text generation models. Newton-Rex is not aware of any large-scale language model that would currently meet the standards required for certification, as most are trained on vast amounts of copyrighted content and would struggle to satisfy the licensing requirements.
Despite the challenges, Newton-Rex remains hopeful that language models can be developed using a smaller amount of licensed data. He believes alternative approaches are possible, as continuing with the current practices could pose a significant threat to creative industries and human creativity itself.
In conclusion, the Fairly Trained initiative spearheaded by a former Stability AI executive seeks to address copyright concerns in generative AI. By certifying companies that license their training data, Fairly Trained aims to give consumers a way to choose generative AI tools that prioritize creator consent. While challenges remain in certifying text generation models, Newton-Rex remains optimistic that alternative approaches can ensure a fairer world for human creators.