Tech leaders are calling for regulation of artificial intelligence (AI) to ensure the technology develops responsibly over the next decade. Sam Altman, CEO of the AI research lab OpenAI, warns that AI could pose an existential risk to humanity on the scale of pandemics and nuclear war, yet he continues to promote the benefits of building superintelligence despite those risks. Altman is leading the effort to create artificial general intelligence (AGI): an AI system with human-level intelligence capable of solving problems humanity has so far been unable to. OpenAI has proposed an international regulatory framework for superintelligence, modeled on the International Atomic Energy Agency, that would track the computing power devoted to training systems and restrict certain approaches. Altman has also become a proponent of universal basic income as a means of redistributing the wealth generated by AI. His vision of the future is one of increasingly powerful tools, with billions or trillions of copies in use around the world, dramatically increasing individual productivity and quality of life.
What Are the Limits of AI? ChatGPT Founder Considers Whether it Will Save or Destroy Humanity