OpenAI, a leading AI company, is on the lookout for security flaws in its software, including ChatGPT. The company is offering a Bug Bounty Program that rewards users up to $20,000 for finding and reporting security flaws. Security researchers and ethical hackers can dive in, help OpenAI patch potential issues, and keep its software safe and secure.
Bug bounty programs are common in the marketplace and can be likened to beta testing an app. Through bug bounties, companies look for bugs and security flaws that could leave their users vulnerable to malicious actors. For its Bug Bounty Program, OpenAI has partnered with Bugcrowd, a crowdsourced security platform that connects companies with security researchers. As of now, 24 vulnerabilities have been rewarded, with an average payout of $983.33.
However, OpenAI does place limits on what can be reported through the Bug Bounty Program. Issues like model safety and hallucinated content are out of scope and should instead be reported through OpenAI's dedicated model behavior feedback form. The company also publishes detailed lists of in-scope and out-of-scope issues, so all participants should read through the rules carefully before submitting a report.
ChatGPT was developed by the artificial intelligence firm OpenAI, which is responsible for other technological products as well. The chatbot saw immense growth after its launch, reaching over 100 million active users in January 2023. OpenAI's program seeks to harness its dedicated community to better its products and safeguard its users.
Bugcrowd, the bug bounty platform partnering with OpenAI's program, connects bug bounty hunters with companies that run open programs. Founded in 2012 by Casey Ellis, Bugcrowd has become one of the leading platforms for outsourcing security vulnerability discovery, valuing research and dedication.