Mira Murati, Chief Technology Officer at OpenAI, believes regulators should be heavily involved in developing safety standards for artificially intelligent models and related technologies. In a recent interview with the Associated Press, Murati said that OpenAI took a deliberately slow approach to launching its GPT-4 model so the company could audit it for imbalances and unexpected outcomes.
Murati further argued that the proposed six-month pause on AI development is not the best way to build safer systems, and that we are still very far from achieving artificial general intelligence (AGI). Prominent figures such as Elon Musk and Gary Marcus have called for increased regulation and a pause on AI progress worldwide, while others, including Bill Gates, Yann LeCun, and Andrew Ng, have spoken out against a pause in development.
OpenAI is a prominent and influential artificial intelligence company that has recently come under fire on a number of fronts, including government regulation. In Italy, regulators banned the company's GPT model and gave OpenAI a hard deadline of April 30 to comply with EU regulations. The ruling has had a domino effect on the European cryptocurrency industry, where trading bots that rely on the GPT API may well be forced to move elsewhere.
Mira Murati is the Chief Technology Officer at OpenAI and an outspoken voice on the safety of artificially intelligent technologies. In Murati's view, AI development should be closely monitored by government regulatory bodies, with standards established to reduce the risks of deployment. She has also been vocal in her objections both to the idea of a six-month pause on development and to the view that GPT-4 is close to achieving AGI. While the path to AGI is long and difficult, Murati is confident that OpenAI is taking the necessary precautions to keep its models safe to use.