OpenAI and United Nations Collaborate to Address Risks of Generative AI
OpenAI, a leading artificial intelligence (AI) research lab, has joined forces with the United Nations to address potential catastrophic risks associated with generative AI. This collaboration comes amidst a wave of news announcements surrounding the future of AI and its impact on society.
The partnership between OpenAI and the UN aims to study and analyze the risks associated with generative AI, which encompasses the potential for unjust bias, manipulation of models, and misinterpretations of data. With the rapid integration of generative AI into various products and features, both organizations recognize the need to proactively anticipate and test for these risks.
In light of this collaboration, Google recently announced the expansion of its vulnerability rewards program (VRP) to include scenarios related to generative AI. By leveraging the expertise of security researchers and ethical hackers, Google aims to identify and address vulnerabilities to enhance the safety and security of its generative AI products.
According to Alex Rice, the co-founder and CTO of HackerOne, the ethical hacker community is a valuable resource for exploring emerging technology and identifying potential vulnerabilities. Rice predicts that generative AI will become a significant target for hackers, leading to an increase in bug bounty programs dedicated to this field.
Casey Ellis, founder and CTO at Bugcrowd, also emphasizes the role of AI in hacking toolchains. More than 90% of hackers surveyed by Bugcrowd reported using AI in their hacking endeavors. As AI continues to advance, traditional vulnerability research and bug hunting will be augmented by AI testing, broadening the scope of potential threats.
While the United Nations’ formation of a panel to report on the governance of AI may have limitations, it still holds value in educating the public about the risks and benefits of AI. Shawn Surber, senior director of technical accounts at Tanium, suggests that the UN’s mission could potentially be influenced by the agendas of the panelists. Nevertheless, having OpenAI’s expertise focused on studying nightmare scenarios associated with AI is a positive development.
John Bambenek, principal threat hunter at Netenrich, highlights the importance of starting with real-world implementations of AI before diving into speculative exercises on potential risks. He mentions the examples of facial recognition technology, which has shown positive results in certain applications but has also been associated with human rights violations in policing.
As discussions surrounding the regulation of AI intensify, Kevin Surace, chairman and CTO at Token, raises concerns about the effectiveness of these regulations in preventing rogue states from ignoring the rules. While major AI providers have implemented guardrails to mitigate risks, the proliferation of open-source models may require additional precautions.
In conclusion, the collaboration between OpenAI and the United Nations marks an important step in addressing the potential risks of generative AI. By studying and analyzing these risks, both organizations aim to ensure the safe and responsible development and implementation of AI technologies. With the involvement of security researchers and ethical hackers, the industry is actively working towards enhancing the security of generative AI products.