OpenAI, the tech company behind the cutting-edge Artificial Intelligence (AI) system ChatGPT, believes the best approach to regulating AI is not government-mandated policy but industry-led voluntary standards.
At a panel discussion in Washington, D.C., hosted by BSA (The Software Alliance), OpenAI General Counsel Jason Kwon asserted that AI is developing so quickly that regulations struggle to keep up. He instead proposed an industry-led approach in which companies design tests and benchmarks to catch issues such as unintended discrimination or unwanted bias in AI systems.
Responding to calls for regulation of AI-generated campaign ads after a Republican video depicted a dystopian Biden victory in 2024, Kwon again suggested voluntary, industry-led tests to catch "toxic" outputs. Such tests, he said, should first assess how often unwanted results actually occur before policymakers decide whether to make them compulsory.
OpenAI faces its own challenges with its AI systems. Kwon admitted the company was aware of the risk of disinformation but was surprised by issues such as "toxic outputs." To address this, it employs experts to deliberately provoke such outputs from the system and then adds safeguards to prevent them.
Kwon agreed that federal policymakers should have access to information about AI systems, but suggested this could be achieved through voluntary reporting rather than mandatory rules.
OpenAI was founded in 2015 by a group including entrepreneur Sam Altman, Elon Musk, Greg Brockman, and Ilya Sutskever. The company has since grown into one of the leading firms in AI, with its ChatGPT system among the most advanced AI tools available.