AI experts, including the CEOs of OpenAI and Google DeepMind, have signed a public warning that artificial intelligence could pose a risk of human extinction if not developed and used responsibly. In a statement released by the Center for AI Safety, the group urges that mitigating the risks of AI be treated as a global priority. They also call for a coordinated international effort to develop AI safely, given the technology's potential for serious harm. Concerns include autonomous weapons capable of killing without human intervention and the use of AI to manipulate people or spread misinformation. The warning comes as AI systems grow increasingly powerful, yet development of advanced generative AI continues apace.
OpenAI, co-founded by Sam Altman and Elon Musk, is an AI research firm that has developed widely used AI products such as GPT-4 and ChatGPT. However, critics argue that OpenAI has moved too quickly to release its generative AI products without waiting for proper regulation.
Sam Altman is the CEO of OpenAI and a prominent leader in AI research. He has built a deep partnership with Microsoft following the company's major investments in OpenAI. Musk, by contrast, has criticized OpenAI's development strategy and joined other tech leaders in signing the Future of Life Institute's open letter, which calls on AI developers to pause training models more powerful than GPT-4 for at least six months. Altman himself has acknowledged the need for clear and strict regulation of AI development.