OpenAI, the company behind ChatGPT, is now offering security researchers up to $20,000 to identify security flaws in ChatGPT and its other products. The bug bounty programme, run through Bugcrowd, is part of the company's effort to prevent future malicious attacks. Submissions are rated using the Bugcrowd Vulnerability Rating Taxonomy, and rewards range from $200 to $20,000 depending on the severity of the finding.
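To illustrate how a severity taxonomy like Bugcrowd's VRT might feed into payout decisions, here is a minimal sketch in Python. Only the $200 floor and $20,000 ceiling come from OpenAI's announcement; the intermediate tiers and the `reward_range` helper are purely hypothetical assumptions for illustration, not OpenAI's actual payout table.

```python
# Hypothetical sketch: mapping Bugcrowd VRT priority ratings (P1 = most
# severe, P4 = least) to payout ranges. Only the $200 minimum and
# $20,000 maximum are from OpenAI's announcement; the tier boundaries
# below are illustrative assumptions.

VRT_REWARDS = {
    "P1": (6_500, 20_000),  # critical — assumed tier boundaries
    "P2": (1_250, 6_500),   # high — assumed tier boundaries
    "P3": (400, 1_250),     # medium — assumed tier boundaries
    "P4": (200, 400),       # low — assumed tier boundaries
}

def reward_range(priority: str) -> tuple[int, int]:
    """Return the assumed (min, max) payout for a VRT priority rating."""
    return VRT_REWARDS[priority]

if __name__ == "__main__":
    low, high = reward_range("P1")
    print(f"P1 findings: ${low:,} to ${high:,}")
```

In a real programme, the payout within a tier would also depend on report quality and impact, which is why each tier maps to a range rather than a single figure.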
The company has been taking security especially seriously since it suffered a breach last month, when a bug inadvertently exposed payment-related information belonging to 1.2 percent of ChatGPT Plus subscribers. OpenAI has asked security researchers to safeguard any confidential data they encounter while testing, and has warned them against conducting security tests on plugins created by other people.
At the same time, OpenAI has listed third-party services it does business with, such as Google Workspace, Asana, Trello, Jira, Monday.com, Zendesk, Salesforce and Stripe, which are out of scope: researchers should not carry out any security tests against them.
OpenAI has hired Tesla's former Chief Security Officer Charlotte Yarkoni as its new chief security officer to lead the company's efforts to protect its products and its customers' data. With these efforts and the bug bounty programme, OpenAI aims to make its products more secure and robust for its customers.