OpenAI, founded by luminaries including Sam Altman and Ilya Sutskever, has recently published an article addressing the growing challenge of superintelligence. The article, co-authored by Altman, Greg Brockman, and Sutskever, seeks to identify and address the potential implications of superintelligence for human life and society.
While acknowledging that superintelligence could do great good, for instance by boosting the economy and quality of life, the authors note that it also presents a set of risks and issues. To address these, Altman et al. suggest establishing an international authority, akin to the International Atomic Energy Agency, to regulate the development of superintelligence above a predetermined capability threshold. Alongside this, the authors call for responsible individual action from companies and for strong public oversight. Finally, the article prescribes developing the technical capabilities needed to make superintelligence safe.
Sam Altman is a noted technologist with a passion for applying machine learning and artificial intelligence to solve a variety of problems. Over the years, he has been involved with a number of companies, including Y Combinator, the startup accelerator, where he served as president. Altman currently serves as CEO of OpenAI, an artificial intelligence research company.
OpenAI is a leading artificial intelligence research company aimed at tackling AI-related issues. It seeks to promote trustworthy artificial general intelligence, collaborate with leading companies in the industry, and develop technology with the potential to profoundly improve human life. The company believes AI could surpass expert-level skill in many domains, greatly boosting productivity and creativity and ultimately leading to a better world. To ensure AI is deployed safely and responsibly, OpenAI is committed to making deliberate decisions about the development and application of AI.