OpenAI Calls for Regulation While Quietly Wanting Less of It

OpenAI, a major player in artificial intelligence research and development, has been publicly advocating for more government regulation of AI while privately lobbying for less oversight. A recently leaked lobbying document revealed that OpenAI argued its large AI models, such as GPT-4, should not be classified as high-risk even when they are capable of generating potentially dangerous content. The paper proposed amendments to the EU's AI Act, which the European Parliament approved last week and which will undergo further negotiations before final adoption. OpenAI was most concerned with the provisions of the law that classify AI systems as high-risk, fearing they would bring tougher regulations and increased red tape. Yet the company has previously called for government oversight of AI, even warning that superintelligent AI could arrive within a decade. The AI Now Institute has called for greater scrutiny of industry lobbying and criticized OpenAI for attempting to write its own rules.