OpenAI executives express concern over the potential dangers of artificial intelligence in public letter

A group of executives from OpenAI and DeepMind, joined by Turing Award winners and other experts in the field, have warned that artificial intelligence (AI) poses a risk of extinction to humanity. They argued that this threat ranks alongside pandemics and nuclear weapons, and called for urgent action to mitigate the risk. Critics, however, have questioned what is meant by the term "AI" and expressed scepticism about such warnings, with many noting a recent trend of open letters from AI experts focused on world-ending risks.