The OpenAI saga demonstrates how big corporations dominate the shaping of our technological future
The dramatic firing and reinstatement of Sam Altman as boss of OpenAI was more than a power shuffle. It was a glimpse at the overwhelming influence that big corporations – and a few individuals – possess when it comes to shaping the direction of artificial intelligence.
And it highlights the need to reassess how we develop technology that has the potential to alter society profoundly, yet is not always guided by the public good.
When OpenAI was founded in 2015, it was apparently committed to working on artificial intelligence (AI) for the benefit of humanity. Part of this lay in establishing itself as a non-profit organization, deviating from the money-making motives of the wider tech industry.
Instead, the company aimed to collaborate openly with other institutions – sharing research and building a safe and friendly AI development environment. Then in 2019, OpenAI took a different course, adopting a "capped-profit" structure (with returns to investors capped at 100 times their original investment).
According to OpenAI, the non-profit model had hindered its ability to attract investment and retain top talent. Unable to offer competitive salaries and stock options, it struggled to keep pace with the likes of Google and Facebook.
The new profit-seeking structure aimed to resolve this. And it also paved the way for OpenAI to receive a very handy US$1 billion (£790 million) of investment from Microsoft. By 2023, Microsoft had increased its investment to US$13 billion and arranged for OpenAI to use its cloud computing platform.
But the dramatic change in OpenAI’s operations also sparked debate over whether the company could continue with its founding goal of building safe and beneficial artificial general intelligence for the benefit of humanity. Some now suggest that profit-driven motives will inevitably prevail.
It is also a development that reflects a core tension in cutting-edge technological research: the contrast between a conventional, competitive, profit-driven approach and a collective, open ethos aimed at improving the world.
Since its rapid expansion into a multi-billion-dollar enterprise, some claim, OpenAI has struggled to uphold its initial commitment to societal benefit. Fears have been raised over everything from weak self-regulation to the development of ever more powerful AI without proper ethical consideration or precautions.
And of course, OpenAI is not alone. Other large corporations hurriedly developing AI technology include Amazon, Facebook, and Google – all vast enterprises with deep pockets and big ambitions. Their collective pursuit of profit, though, underlines the essential role that state funding should play in AI research.
Safeguards are also needed to protect against misuse. And research suggests that these protections require ongoing human oversight, through policy and funding that is not motivated solely by profit.
Achieving this will not be straightforward: it would require much improved access to research resources, stronger regulatory powers, and a new level of cooperation between governments and the private sector. But it could also involve a bold new vision of technology's role in a democratic digital economy, designed to decentralize power and profits.
Ultimately, the OpenAI saga should teach us an important lesson about democratizing technological governance. Alternative funding and governance structures must be explored if AI is to be developed equitably, prioritizing public benefit over investor returns.
With thoughtful regulation and democratic ownership models, innovations like AI could be used to usher in an age of shared prosperity.
The squabbles at OpenAI represent a mere skirmish in a far greater struggle. And that struggle will determine whether society is able to collaborate and participate in innovation for the collective good – or if technological advancement remains tethered to the whims of a few powerful capitalists.