OpenAI has announced that it will provide funding for researchers to ensure the safety of superintelligent AI systems. The company is offering $10 million in grants through its Superalignment Fast Grants program. The aim is to support research that focuses on controlling AI systems that are more intelligent than humans.
OpenAI hopes the grant-funded research will shed light on how AI systems can reliably assess the outputs of newer, more advanced AI models. The ultimate goal is to build an AI lie detector. However, OpenAI acknowledges that fully comprehending superhuman AI systems is a challenging task.
To address this challenge, OpenAI is offering individual researchers, non-profit organizations, and academic labs grants ranging from $10,000 to $100,000. Graduate students can also apply for the OpenAI-sponsored $150,000 Superalignment Fellowship. The company is particularly interested in supporting newcomers to alignment research, and no prior experience in the field is required.
OpenAI’s recent research identifies seven practices that can help ensure the safety and accountability of AI systems. The company is now looking to fund further studies to answer open questions raised by that research.
Vinod Khosla, an investor in OpenAI, has weighed in on the debate over the risks of superintelligent AI. In a recent statement, Khosla suggested that China posed a more significant threat than sentient AI.
Agentic AI systems also figure in the discussion of superintelligent AI. The term refers to AI systems that can perform a wide range of actions and autonomously pursue complex goals on behalf of a user. OpenAI researchers emphasize the importance of making agentic AI systems safe by minimizing failures, vulnerabilities, and opportunities for abuse.
OpenAI’s efforts to ensure the safety of AI systems come at a time when cybercriminals are exploiting AI-powered chatbots, such as OpenAI’s ChatGPT, for malicious purposes.
OpenAI’s funding initiative addresses the significant challenge of ensuring the safety of superintelligent AI systems. By supporting researchers with grants, the company hopes to make progress in controlling advanced AI models and assessing their outputs, ultimately improving the safety and reliability of AI systems for the benefit of society as a whole.