The world of artificial intelligence is transforming rapidly, with advanced language-generating systems and chatbots dominating public news headlines. What goes largely unnoticed, however, is that large private AI companies have entrenched their power to the point that a handful of individuals and corporations now control much of the sector's resources and knowledge, and with them the shape of its future impact.
To shed light on the issue, a paper published in the journal Science has called on policymakers to pay attention to this 'industrial capture' phenomenon, whose relevance is becoming ever more prominent. Generative AI, the technology underlying systems such as ChatGPT, is being embedded into software used by billions of people, including Microsoft Office, Google Docs, and Gmail, and is disrupting a variety of industries, from the media to educational institutions and law firms.
The Science paper found that almost 70% of AI PhDs were employed by companies in 2020, up from 21% in 2004, and that corporate hiring of AI faculty had risen eightfold since 2006, far outpacing the overall growth in computer science research faculty. The research also noted that public investment in AI by non-defence US government agencies totalled $1.5bn, with the European Commission pledging a further €1bn.
By comparison, private sector investment topped an astounding $340bn in 2021. This disparity in resources creates an uneven power divide: researchers have limited access to the large models built in corporate labs, such as GPT-4, which sit behind mountains of data and computing power that only tech giants like Amazon, Microsoft, and Google can muster.
Meredith Whittaker, president of the encrypted messaging app Signal, illustrated this power imbalance in a 2021 paper, drawing a comparison with the US military's historical domination of scientific research. "It is here, in these darker histories", she wrote, "that we confront the steep cost of capture – whether military or industrial – and its perilous implications for academic freedom . . . capable of holding power to account."
Alex Hanna, director of research at the Distributed AI Research Institute and a former member of Google's Ethical AI team, agrees, adding that the money invested in generative AI over the past six years has been channelled largely into start-ups such as Anthropic, Inflection, Character.ai and Adept AI, as well as bigger projects like OpenAI. Indeed, OpenAI pivoted to a profit-making structure in 2019, arguing that it needed to rapidly scale up investment in computing power and talent – which it did to the tune of $1bn, courtesy of Microsoft.
The consequences of this disparity are wide-reaching and manifold. Not least, the public is left without neutral, publicly controlled alternatives to corporate AI resources such as models and data sets, while the applications built on this technology are likely to be prioritized for capital gain rather than broader interests and public welfare.
It is therefore paramount that policymakers heed researchers' warnings, both to protect academic freedom and to facilitate the development of AI applications that serve the public rather than corporate interests alone. All signs point to this issue deserving the utmost attention and diligence, for its consequences could shape the future for better or for worse.