Google has released the Secure AI Framework (SAIF), a set of recommended measures for businesses to protect their artificial intelligence (AI) models from unauthorized access. The framework was created to mitigate the risks that come with integrating AI capabilities into products, and adhering to a responsible framework will become even more critical as AI grows more ubiquitous, according to Google executives. The six core elements of SAIF include regularly testing AI systems, setting up an AI risk-aware team to mitigate business risks, leveraging automation in cyber defenses, conducting security reviews of AI models, expanding threat intelligence research, and extending existing security controls to new AI systems. Google says SAIF can help protect a business's model code and training data from theft and block a variety of other threats, such as the poisoning of training data and the injection of malicious inputs.
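To make the data-poisoning threat concrete, the sketch below shows one way a conventional security control can be extended to an AI pipeline, in the spirit of SAIF's guidance; it is not part of SAIF itself, and the file and manifest names are hypothetical. The idea is to verify training data against known-good checksums before a training run, so tampered or injected records are caught before they can poison the model.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical paths; adjust to your own pipeline layout.
DATA_DIR = Path("training_data")
MANIFEST = Path("data_manifest.json")  # {"relative/path.csv": "<sha256>", ...}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data() -> list[str]:
    """Compare every file under DATA_DIR against the recorded manifest.

    Returns a list of human-readable findings; an empty list means the
    dataset matches the known-good snapshot.
    """
    expected = json.loads(MANIFEST.read_text())
    findings = []
    seen = set()
    for path in sorted(DATA_DIR.rglob("*")):
        if not path.is_file():
            continue
        rel = str(path.relative_to(DATA_DIR))
        seen.add(rel)
        if rel not in expected:
            findings.append(f"unexpected file: {rel}")
        elif sha256_of(path) != expected[rel]:
            findings.append(f"modified file: {rel}")
    # Files listed in the manifest but absent on disk are also suspicious.
    for rel in expected.keys() - seen:
        findings.append(f"missing file: {rel}")
    return findings

if __name__ == "__main__":
    problems = verify_training_data()
    if problems:
        raise SystemExit("dataset integrity check failed:\n" + "\n".join(problems))
    print("training data matches manifest")
```

A check like this only detects tampering with a previously snapshotted dataset; defending against poisoned data that enters upstream collection requires additional provenance and review controls of the kind SAIF recommends.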
Meanwhile, Microsoft has committed to transparency and to fostering a culture of trust within organizations regarding the potential harm and abuse that can arise from unchecked AI deployment. The company unveiled three AI Customer Commitments: sharing its learnings about developing and deploying AI responsibly, creating an AI Assurance Program, and supporting customers as they implement their own AI systems responsibly. Antony Cook, Corporate Vice President and Deputy General Counsel at Microsoft, said the commitments aim to allay concerns about the risks associated with AI and to encourage ethical AI practices.