ExtraHop Networks Inc. has unveiled its newest tool, Reveal(x), designed to detect the use of generative artificial intelligence tools such as OpenAI LP’s ChatGPT. The tool monitors employees’ use of AI and is intended to help companies protect against data and intellectual property theft.
The tool’s primary data source is network packets, which give it visibility into all data sent to or received from OpenAI domains. With that visibility, security teams can assess the potential risk associated with each individual’s use of generative AI.
ExtraHop Networks Inc., a cloud-based network detection and response platform company, recognizes the serious concern that data and IP leakage may result from employees’ use of AI-as-a-service tools. It addresses this issue by examining all connected devices, users, and the volume of data sent to or received from OpenAI domains, as sketched below.
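For illustration, here is a minimal Python sketch of the general approach described above: inspecting connection metadata and tallying per-user traffic to OpenAI domains. The domain list, record fields, and byte threshold are assumptions made for this example, not details of ExtraHop’s actual implementation.

```python
# Hypothetical sketch: flag traffic to OpenAI domains from connection metadata.
# The domain list, record fields, and byte threshold are illustrative assumptions,
# not details of ExtraHop Reveal(x) itself.
from collections import defaultdict

OPENAI_DOMAINS = ("openai.com", "api.openai.com", "chat.openai.com")

def summarize_openai_traffic(connections, byte_threshold=1_000_000):
    """Aggregate outbound bytes to OpenAI domains per user and flag heavy senders.

    `connections` is an iterable of dicts such as:
        {"user": "alice", "domain": "api.openai.com", "bytes_out": 52431}
    """
    bytes_per_user = defaultdict(int)
    for conn in connections:
        domain = conn.get("domain", "")
        if any(domain == d or domain.endswith("." + d) for d in OPENAI_DOMAINS):
            bytes_per_user[conn["user"]] += conn.get("bytes_out", 0)

    # Users whose outbound volume to OpenAI domains exceeds the threshold
    flagged = {u: b for u, b in bytes_per_user.items() if b > byte_threshold}
    return bytes_per_user, flagged

if __name__ == "__main__":
    sample = [
        {"user": "alice", "domain": "api.openai.com", "bytes_out": 1_200_000},
        {"user": "bob", "domain": "chat.openai.com", "bytes_out": 80_000},
        {"user": "alice", "domain": "example.com", "bytes_out": 5_000_000},
    ]
    totals, flagged = summarize_openai_traffic(sample)
    print("Per-user totals to OpenAI domains:", dict(totals))
    print("Flagged users:", flagged)
```

A real network detection and response platform would derive these records from packet or flow analysis rather than a prepared list, but the same aggregation logic applies.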
Chris Kissel, an analyst with International Data Corporation, explains that the biggest risk for organizations is the leakage of confidential data and IP. He adds that, with its expertise in network intelligence, ExtraHop can provide unique visibility into AI usage through its Reveal(x) tool.
Overall, with Reveal(x), companies can audit their level of compliance, protect against potential IP loss, and gain insight into the data being shared with AI-as-a-service tools. With the introduction of this new tool, ExtraHop Networks Inc. has meaningfully strengthened the security of organizations’ data.