The Death of AI as a Force for Good: OpenAI’s Transformation Sparks Controversy

Meet the new boss. Same as the old boss. These are the closing lines of a song by The Who, written by Pete Townshend, titled Won't Get Fooled Again. The song is in many ways emblematic of what we have seen at OpenAI over the last few days. The back-and-forth will go down in legend. But what we have really seen is probably the death of any form of artificial intelligence (AI) as a force for good.

OpenAI began as a non-profit research lab whose mission was to safely develop AI at or beyond human level, often called artificial general intelligence or AGI (with 'singularity' defined as the point at which AI surpasses human intelligence). The emphasis was on safety, to avoid what Yuval Noah Harari once warned of in the Financial Times: once big data knows me better than I know myself, authority will shift from humans to algorithms.

OpenAI found a very productive route in large language models (LLMs), which generate surprisingly good text via a chatbot that sounds remarkably like a human; what is more striking is that the bot has been trained on virtually the entire information storehouse that is the internet. This development has come to be known as 'Generative AI.'

However, Generative AI is extraordinarily inefficient. Poring through 175 zettabytes of data, roughly the size of the entire web in 2022, is a Herculean task. And this store is growing at warp speed. A zettabyte is equal to 1,000 exabytes, an exabyte is a billion gigabytes, and so a zettabyte is a trillion gigabytes. To put one zettabyte in perspective, consider this: it would take about 2,535 years to stream one zettabyte to your device, even if the device had access to some of the fastest commercial networks available today, which run at about 100 Gbps (gigabits per second).
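A rough back-of-the-envelope check of that figure, as a sketch assuming decimal units (1 ZB = 10^21 bytes) and a perfectly sustained 100 Gbps link with no protocol overhead:

```python
# Back-of-the-envelope check of the streaming-time claim above.
# Assumptions: 1 zettabyte = 10**21 bytes (decimal units) and a
# perfectly sustained 100 Gbps link with no protocol overhead.
ZETTABYTE_BITS = 10**21 * 8      # one zettabyte expressed in bits
LINK_SPEED_BPS = 100 * 10**9     # 100 gigabits per second

seconds = ZETTABYTE_BITS / LINK_SPEED_BPS
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")     # prints roughly 2,535 years
```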

It should go without saying that developing and deploying such large models requires enormous amounts of computing infrastructure and, as a result, a massive hoard of money.


This was a conundrum for OpenAI. How does a non-profit lure enough investment for such a massive enterprise? The powers that be dreamt up a scheme they may have thought was the perfect answer: create a capped-profit commercial entity to draw external investors. Almost everyone in the company worked for this new for-profit arm. Limits were placed on the company's commercial pickings, however.

The profit delivered to investors was to be capped — for the first backers at 100 times what they put in — with overflows to go back to the non-profit part. This drew Microsoft as a huge backer, which pumped billions of dollars into OpenAI to feed its zettabyte appetite.

The overall structure was governed by the original non-profit board, which answered only to the goals of OpenAI’s original mission, and had the power to fire Altman. Also, only a minority of directors could hold shares in the for-profit entity, and the for-profit company’s founding documents required it to prioritize public benefits over profits. But all this is moot now. The truth is that Altman was not in fact replaceable by the board, as we have seen.

Now that the old boss is back, most of the old board is gone, and directors who will probably be more pliant have taken over, OpenAI is in a free-for-all situation, which could affect the entire sector. There was nothing stopping Google in its bid for AI hegemony (nor indeed Microsoft, which has an army developing its own AI products based on OpenAI's advances, among others). The point is that now nothing can stop OpenAI from joining this free-for-all either, despite Altman's agreement to submit to an internal review of his behaviour as CEO of OpenAI.


These developments aren't good for us. The last barrier to AI's unbridled commercialization has fallen. Vijay Chandru of the Indian Institute of Science, founder of Strand Genomics, argued in a guest editorial in the December 2020 issue of Current Science, a well-respected scientific journal published by the Indian Academy of Sciences, that humanity faces a stark reckoning with the pace of technological and scientific innovation, and that we need to get our act together. Chandru writes: "The second machine age also warns us of the anthropological impact of the unbridled power of digital machines, which will also create difficult social problems such as inequity in access to technology, the perverse use of technology to spread fake news and the ability of technology to influence traditional democracies."

Chandru also talks about the rate of change in biotechnology and cites Flatley's law, a biotechnology counterpart to Moore's law. Named after Illumina CEO Jay Flatley, it captures how the cost of sequencing DNA has fallen far faster than Moore's law would predict, from about $100 million per human genome to roughly $1,000. Chandru writes: "The next wave of the genomics revolution comes from our ability to write on genomes, i.e. to edit, or more accurately proofread, and modify them." This could have enormous impact, because potentially we will be able to do this for human, plant, animal and microbial genomes.

We should be concerned by the unbridled growth of biotechnology. AI is in the public eye and being closely watched by governments, even if only to appropriate AI advances for their own causes. We know less about the impact that biotechnology could have, and risk ignoring an Armageddon larger than any that AI could bring on.
