University of Notre Dame Joins AI Safety Institute Consortium to Mitigate Risks and Develop Safer AI Systems

University of Notre Dame researchers have joined a new consortium aimed at supporting responsible artificial intelligence (AI) and identifying potential risks in current AI systems. The consortium, known as the Artificial Intelligence Safety Institute Consortium (AISIC), was established by the National Institute of Standards and Technology (NIST) in response to a presidential executive order that highlighted the importance of responsible AI use while acknowledging the potential for societal harms.

The consortium includes more than 200 member companies and organizations from sectors actively involved in developing and using AI systems. Its goal is to develop advanced measurement techniques and standards to ensure that AI is safe, secure, and trustworthy.

Notre Dame researchers will contribute their expertise to the challenge of managing AI risks by measuring and understanding them. The consortium will focus on dual-use foundation models, advanced AI systems that can be applied to a wide range of purposes. Improved evaluation and measurement techniques will give researchers and practitioners a deeper understanding of the capabilities, risks, and benefits of AI systems, offering guidance to industry leaders.

By joining AISIC, Notre Dame researchers will have the opportunity to contribute to the development of responsible AI and work toward benefiting society as a whole.