Google Enhances Bug Bounty Program to Address Vulnerabilities in Generative AI
In an effort to further strengthen the security of its systems, Google has recently expanded its Vulnerability Rewards Program (VRP) to encompass bugs related to generative artificial intelligence (AI). With this move, the company aims to ensure the responsible use of AI and benefit both developers and consumers. The bug bounty program, which has already gained recognition for its effective user protection measures, has paid out millions of dollars in rewards over the years. In 2022 alone, it granted over $12 million to security researchers.
The decision to extend the VRP to cover emerging issues in generative AI aligns with Google’s commitment, shared with other leading AI companies, to advance the detection of vulnerabilities in AI systems. Laurie Richardson, Vice President of Trust & Safety, and Royal Hansen, Vice President of Privacy, Safety, and Security Engineering at Google, acknowledged that generative AI introduces concerns that differ from those of traditional digital security. These include potential unfair bias, manipulation of models, and hallucinations, in which a model produces plausible but false output.
Google has now published guidelines specific to the AI-focused portion of its VRP, outlining which cases fall within the program’s scope. Under the company’s general Vulnerability Rewards Program, payouts range from $500 to $31,337 for severe vulnerabilities, such as those allowing takeover of a Google account, while even the lowest-eligible security flaw is rewarded with a minimum of $100.
Richardson and Hansen expressed their expectation that the inclusion of generative AI vulnerabilities in the VRP will encourage security researchers to submit more bugs, driving the goal of a safer and more secure generative AI. They also emphasized the potential for collaboration with the open-source security community and industry experts, as well as the collective effort to make AI safer for everyone.
Google’s initiative highlights the growing importance of addressing the security risks that accompany AI advances. By incentivizing security research and applying supply chain security measures to AI, the company aims to foster collaboration and promote the safety of AI technology, reinforcing its commitment to protecting users and maintaining trust in the digital ecosystem.
By expanding the VRP and engaging the global research community with rewards for identifying vulnerabilities, Google reinforces the shared responsibility of safeguarding AI and ensuring its responsible development and deployment. Overall, the move represents a significant step toward more secure and reliable AI systems, and a testament to the company’s ongoing commitment to transparency, collaboration, and user safety in a rapidly evolving landscape.