Superintelligence is an artificial intelligence that exceeds human cognitive performance across all domains of interest. Swedish philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, argues that AI may destroy humanity if we are unprepared for it. OpenAI chief executive Sam Altman signed a statement warning that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war, and he believes preparation must begin now, before it is too late.

The article discusses how superintelligence could offer a life of leisure, cure diseases, eliminate suffering, and transform humanity into an interstellar species. However, it could also replace humans as the dominant life form on Earth and come to view us as superfluous to its own goals, leading to our extinction. To prevent these outcomes, humanity needs to develop AI safety measures.
OpenAI, co-founded by Sam Altman in 2015, is arguably the leading private AI firm. The company aims to better understand and mitigate AI risks. Altman believes superintelligence has the potential to transform humanity for the better, while recognizing the need to prepare for AI's dangers.
Sam Altman is a tech entrepreneur and investor who co-founded OpenAI and serves as an entrepreneur-in-residence at the venture capital firm Y Combinator. Altman advocates developing AI safety measures while warning of the risks of superintelligence.
(ChatGPT is a chatbot created by OpenAI that became the fastest-growing consumer app in history.)