Sam Altman, the man behind ChatGPT, has expressed growing concern about possible misuse of the artificial intelligence (AI) technology he helped create. According to Altman, the worst-case scenario could mean "lights out" for humanity. In a recent interview with StrictlyVC, he said, "I'm more worried about an accidental misuse case in the short term." Altman worries that the complicated software behind ChatGPT could be misunderstood or mishandled, leading to disastrous outcomes.
Among his biggest concerns is the possibility of AI being used to create new diseases, launch cyberattacks, or sow disinformation. Altman warns that bad actors could exploit the technology and that society has only a limited amount of time to figure out how to respond. He has signed a statement comparing the dangers of AI to those of nuclear war, and he fears that if the technology goes wrong, it could seriously damage the economy.
Altman is also concerned that as AI models get better and better, users will rely less and less on their own critical thinking. He cautioned that ChatGPT can lie, suggested that certain jobs will be eliminated pretty quickly, and warned that people risk getting dumber as the technology gets smarter.
Altman's biggest fear is that AI could create a worldwide dystopia. In a blog post earlier this year, he wrote that "a misaligned superintelligent AGI could cause grievous harm to the world." Despite these concerns, Altman remains hopeful that society will be able to adapt to and regulate the use of AI, preventing its misuse.
Overall, Altman's cautionary statements about the potential dangers of AI shed light on the importance of responsible use and regulation of this powerful technology. As AI models continue to evolve, it is essential that society take a proactive approach to ensuring AI is used for good while avoiding disastrous consequences.