Open-source AI tools contain critical vulnerabilities that put the systems built on them at risk, according to a new report by cybersecurity startup Protect AI Inc. The company, founded in 2022 by former employees of Amazon Web Services Inc. and Oracle Corp., offers products aimed at making AI applications safer. Through its bug bounty program, which counts more than 13,000 community members, Protect AI has identified key vulnerabilities in AI and machine learning systems.
A central finding of Protect AI's bug bounty program and research is that the tools making up the supply chain for building machine learning models carry their own distinct security risks. Because many of these tools are open source and widely deployed, flaws such as unauthenticated remote code execution and local file inclusion can lead to a complete system takeover, as the sketch below illustrates.
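The report does not publish exploit code, so the following is a minimal, hypothetical sketch of how the unauthenticated remote code execution class arises in ML tooling. The names are invented for illustration; the underlying mechanism is real: many ML tools load model artifacts with Python's pickle module, and unpickling attacker-controlled bytes executes code.

```python
import os
import pickle

# Hypothetical illustration of the unauthenticated RCE class:
# whatever __reduce__ returns is called when the bytes are unpickled.
class MaliciousArtifact:
    def __reduce__(self):
        return (os.system, ("echo code execution on load",))

payload = pickle.dumps(MaliciousArtifact())

# A service that unpickles uploaded "model files" without
# authentication hands command execution to anyone who can reach it.
pickle.loads(payload)  # runs the shell command above
```

This is why treating model files from untrusted sources as executable code, not inert data, is a recurring recommendation for ML pipelines.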
The report highlights specific vulnerabilities discovered in the widely used MLflow tool. One was a critical flaw in the code that pulls down remote data storage, which could allow attackers to execute commands on users' systems. Another was an Arbitrary File Overwrite vulnerability that could let malicious actors remotely overwrite files on the MLflow server. Finally, the report details a Local File Inclusion issue that could expose sensitive information or lead to a complete system takeover.
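MLflow's actual code is not reproduced in the report, but both file-handling flaws follow a common pattern, sketched below with hypothetical names: an artifact handler that joins a client-supplied path onto a storage root without validation. Reads that escape the root are Local File Inclusion; the same unchecked join on a write path enables Arbitrary File Overwrite.

```python
from pathlib import Path

# Hypothetical storage root; not MLflow's actual layout.
ARTIFACT_ROOT = Path("/srv/artifacts").resolve()

def read_artifact_vulnerable(client_path: str) -> bytes:
    # Vulnerable: a request for "../../etc/passwd" escapes the root.
    # The same unchecked join on a write path overwrites server files.
    return (ARTIFACT_ROOT / client_path).read_bytes()

def read_artifact_safe(client_path: str) -> bytes:
    # Resolve symlinks and ".." first, then confirm the final path
    # is still inside the artifact root (Path.is_relative_to, 3.9+).
    target = (ARTIFACT_ROOT / client_path).resolve()
    if not target.is_relative_to(ARTIFACT_ROOT):
        raise PermissionError(f"path escapes artifact root: {client_path}")
    return target.read_bytes()
```

The containment check closes both the read and write variants, which is why path-traversal fixes typically land in a shared path-validation helper rather than in individual endpoints.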
Protect AI shared all of the vulnerabilities disclosed in the report with the tool maintainers at least 45 days before publication. The findings underscore the need for robust security measures in AI and machine learning tools, given their access to critical and sensitive data.
In short, critical vulnerabilities in open-source AI tools can compromise the systems that depend on them. Protect AI's bug bounty program has surfaced concrete examples in widely used tools, and the report stands as a reminder that organizations should prioritize the security of their AI and machine learning systems to protect sensitive information and prevent unauthorized access.