The Future of Life Institute recently published an open letter calling on the artificial intelligence sector to pause the training of AI systems more powerful than GPT-4 for at least six months, citing the many risks such systems pose. Many well-known figures co-signed the letter, including Elon Musk, Apple co-founder Steve Wozniak, and Emad Mostaque, CEO of Stability AI.
However, in light of OpenAI’s highly successful launch of GPT-4, a large language model, some of the august signatories may have had an ulterior motive for demanding a pause: to catch up in an AI race they fear they are losing. Rather than attempting to halt the development of AI through such drastic measures, a much better solution is to apply the basics of good leadership and build transparency and inclusion into the AI sector.
An analysis of the first 500 signatures on the ‘Pause’ letter uncovered some interesting results. Around 20% of the signatories came from private entities that had invested in the AI sector and may have stood to gain from a pause. This group included the CEOs, founders, or directors of well-known companies such as Tesla, Getty Images, DeepMind, DeepAI, Scale AI, Big Mother AI, NEXT.robotics and the ominously named Stability AI. Among the signatories was Elon Musk, a co-founder of OpenAI who has since departed it, and founder of X.ai Corp, whose incorporation documents were filed earlier in the year. Soon after Musk signed the letter, Twitter purchased 10,000 graphics processing units (GPUs) with the intention of developing its own large language model to compete with OpenAI.
The Future of Life Institute’s letter is largely composed of vague predictions about the possible future harms of powerful artificial intelligence. The DAIR (Distributed AI Research) Institute takes a different view: the development of AI may be inexorable, but the harms that accompany it are not. DAIR is led by Dr. Timnit Gebru, formerly of Google’s Ethical AI team, who was fired after refusing to retract a paper on the dangers of large language models. Her team believes AI can still be a positive force, on the key condition that its development is deliberate and includes diverse voices.
The DAIR Institute suggests that rather than calling for a pause on inevitable advances in AI, we should focus on managing the threats posed by existing AI technology. It recommends regulation that forces companies to make their data open and transparent, and that gives more weight to the voices of those most likely to be harmed by AI, including immigrants, women, and gig workers. With such controls in place, AI can be a servant to humanity, rather than a hazard.
The debate about the future of AI is not going away. But the development and deployment of powerful large language models such as GPT-4 will not easily be stopped, even if many prominent members of the industry ask for it. A much better solution is to focus on the present: treat the development of AI with the same rigour one applies to any organization, with transparency and inclusion, and prioritize the voices of those who may be put at risk by these advancements.
The companies mentioned in this article have important roles to play in the further development and deployment of AI systems. Tesla Inc is an American electric vehicle and clean energy company, led by CEO Elon Musk, that is embracing advanced AI by equipping its cars with semi-autonomous driving features. Neuralink Corp, also founded by Musk, is a startup developing brain implants intended to allow humans to achieve symbiosis with AI. Google’s Ethical AI team, led by Dr. Timnit Gebru until her firing, was devoted to building responsible AI systems. OpenAI is the San Francisco company behind ChatGPT, GPT-4, and other large language models, while Stability AI is the company behind the open-source image generation model Stable Diffusion. Finally, X.ai Corp, the company newly established by Elon Musk, is furthering the development of AI technology.