A recent report by researchers from Stanford University has revealed a lack of transparency in the artificial intelligence (AI) models developed by leading companies such as OpenAI, Google, and Meta Platforms. The report, produced by Stanford’s Human-Centered Artificial Intelligence research group, assessed the transparency of popular foundation models and found that none of them were particularly open.
The researchers compiled their rankings based on metrics that assessed how much information was disclosed by the model creators and how the systems were utilized. Meta Platforms’ Llama 2 model ranked as the most transparent with a score of 54%, followed by BigScience’s BloomZ with 53% and OpenAI’s GPT-4 with 48%. However, none of the models achieved high marks, highlighting the need for greater transparency in their development.
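To make the scoring concrete, here is a minimal sketch of how percentage scores like these could be derived and ranked. It assumes each model is graded against a fixed set of binary disclosure indicators, with the percentage being the share of indicators satisfied; the indicator total and tallies below are illustrative placeholders, not the index's actual rubric or data.

```python
# Hypothetical sketch: transparency score as the share of binary
# disclosure indicators a model satisfies. Tallies are placeholders.

def transparency_score(satisfied: int, total: int) -> int:
    """Return the percentage of disclosure indicators satisfied."""
    return round(100 * satisfied / total)

# Illustrative tallies out of an assumed 100 indicators.
tallies = {"Llama 2": 54, "BloomZ": 53, "GPT-4": 48}

# Rank models from most to least transparent.
ranking = sorted(
    tallies.items(),
    key=lambda kv: transparency_score(kv[1], 100),
    reverse=True,
)
print(ranking[0][0])  # highest-scoring model
```

Under this reading, a score of 54% simply means just over half of the assessed disclosure criteria were met, which is consistent with the report's conclusion that even the leaders fall well short of full openness.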
The report also emphasized the importance of companies providing more information about the data and human labor involved in training these AI models. While Meta’s open-source Llama 2 and BloomZ scored relatively higher due to their release of research on model creation, even open-source models fell short when it came to disclosing their societal impact and avenues for addressing concerns related to privacy, copyright, and bias.
OpenAI, despite its closed approach to development, scored 48% in the index. Although the company rarely publishes research or discloses the data sources used to train GPT-4, ample public information is available through OpenAI’s partners, whose integrations of GPT-4 into various applications and services reveal details about how the model functions.
The researchers behind the report believe that the transparency index can serve as a benchmark for governments and companies working in the AI space. The European Union’s proposed Artificial Intelligence Act, which aims to establish regulations for AI, further underscores the need for transparency in the field. Under this act, companies using AI tools like GPT-4 would be required to disclose copyrighted materials used in their development. The Foundation Model Transparency Index could aid compliance with such regulations.
Stanford’s Human-Centered Artificial Intelligence research group intends to regularly update the transparency index, expanding its scope to include additional models. This ongoing monitoring aims to promote greater transparency in the development of AI models while driving accountability within the industry.
Despite the presence of open-source communities focused on generative AI, major companies in the field, like OpenAI, have maintained a level of secrecy to protect their research from competitors and address safety concerns. However, increased transparency remains crucial for the responsible development and deployment of AI technologies.
The lack of openness in the AI models developed by industry leaders highlights the need for greater transparency and disclosure of information. As AI continues to shape various aspects of society, it is essential for companies to provide more details about their models’ impact, training methodologies, and potential risks.