As artificial intelligence (AI), machine learning (ML) and large language models (LLMs) dominate the conversation among cybersecurity experts, the question of how to protect those same AI and ML systems is often overlooked. Security vulnerabilities in AI/ML systems fall into five main categories: data risks, software risks, communications risks, human-factor risks and system risks. Yet even as AI is deployed at scale, its security is not fully understood, leaving significant gaps in standardization around the cybersecurity of AI.
In an effort to improve general understanding, the European Union Agency for Cybersecurity (ENISA) released a document in March 2023 on Cybersecurity of AI and Standardisation. The document aimed to provide an overview of standards related to the cybersecurity of AI and to identify gaps in standardization.
The security of AI is not yet well understood; even so, the industry is now seeing a number of vendors launch AI-based products. These products are growing in popularity as AI and ML become more common in cybersecurity operations. Following the release of ChatGPT, for example, Microsoft Security Copilot, ShiftLeft's Qwiet AI and countless others have emerged.
Because AI applications are complex, security threats and vulnerabilities have become more prominent as AI is deployed in new places. The Knight Capital incident, for example, demonstrates the immense impact a single bug in an algorithm can have. That particular case did not involve adversarial behavior, but it shows the potential consequences of coding errors in an algorithm: the financial fallout nearly sent the company into bankruptcy.
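To make that failure mode concrete, here is a minimal, hypothetical sketch of the widely reported pattern behind the incident: a feature flag gets repurposed for new logic, but a stale deployment still carries retired code that honors the same flag. The names and behavior below are illustrative assumptions, not Knight Capital's actual (non-public) code.

```python
# Hypothetical sketch: a reused feature flag silently reactivates retired code.
# All names and behavior are illustrative assumptions.

LEGACY_CODE_STILL_DEPLOYED = True  # e.g. one server missed the cleanup deploy


def legacy_test_loop(order):
    """Retired test logic: keeps emitting child orders with no fill check."""
    return [f"SEND {order} child #{i}" for i in range(5)]  # unbounded in spirit


def new_feature(order):
    """The behavior the reused flag was actually supposed to trigger."""
    return [f"ROUTE {order} via new engine"]


def handle_order(order, flags):
    if "POWER_FLAG" in flags:           # flag repurposed for the new feature...
        if LEGACY_CODE_STILL_DEPLOYED:  # ...but stale code still honors it
            return legacy_test_loop(order)
        return new_feature(order)
    return []


print(handle_order("BUY 100 XYZ", {"POWER_FLAG"}))
# On the stale server this emits runaway child orders instead of one route.
```

The point of the sketch is that nothing here is adversarial: an ordinary deployment gap plus an ambiguous flag is enough to cause runaway behavior, and the same class of error applies to AI/ML pipelines.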
While finding solutions and deepening our understanding of AI/ML security is important, many argue that specialized skills are also needed in this industry. As AI continues to be adopted across all industries, the demand for security talent will rise. Additionally, because AI/ML-based security solutions are still new, it has been difficult to anticipate the full range of implications they can have. This is why security leaders and practitioners must remain vigilant and use the right tools to protect the underlying data and software of AI/ML systems.
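As one concrete illustration of protecting that underlying data and software, the sketch below verifies a model artifact's hash before loading it, a basic guard against tampered or swapped model files. It is a minimal example under stated assumptions: the file path and the trusted digest are hypothetical placeholders, and in practice the digest would be recorded at training time and stored separately from the artifact.

```python
import hashlib
from pathlib import Path

# Hypothetical values: record a trusted digest when the model is built,
# then verify it before every load. Both placeholders are illustrative.
MODEL_PATH = Path("model.pkl")
TRUSTED_DIGEST = "0000000000000000000000000000000000000000000000000000000000000000"


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large artifacts don't fill memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_verified_model(path: Path, trusted_digest: str) -> bytes:
    """Refuse to load an artifact whose hash doesn't match the trusted record."""
    actual = sha256_of(path)
    if actual != trusted_digest:
        raise ValueError(f"{path} failed integrity check: got {actual}")
    return path.read_bytes()  # hand the verified bytes to the real loader
```

An integrity check like this addresses only one narrow slice of the risk categories above (software and data risks), but it shows the kind of routine safeguard practitioners can apply today without waiting for AI-specific standards.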
When it comes to AI security vendors, the industry is still relatively new, with many founders tackling the problem in stealth mode. Examples of prominent vendors include Claralytics, ReSec, DataRobot and Algo. Securing AI and ML systems is essential, and as the use of these technologies continues to grow, more entrepreneurs and security vendors are likely to emerge to help tackle this complex challenge.
With the potential for AI and ML to revolutionize numerous industries, several aspects will need attention, from ethics and the law to intellectual property and ownership. But perhaps most importantly, we must focus on protecting the data, algorithms and software that make AI and ML possible. To prevent potential catastrophes, investing in the security of AI/ML systems is essential. With the right focus and specialized skill sets, the future of AI can remain positive and secure.