Amid the risks of hallucinations, data leaks, and compliance violations, Arthur AI has developed a product to guard against them: Arthur Shield. On July 11-12, Arthur AI will host executives and leaders in San Francisco to discuss the integration and optimization of their AI investments. Founded in 2018, the New York City-based company has raised $60 million to fund its machine learning monitoring and observability work. Its customers include some highly influential organizations: three of the top five United States banks, Humana, John Deere, and even the United States Department of Defense.
Arthur AI is named after Arthur Samuel, who is credited with coining the term "machine learning" back in 1959 and with developing some of the earliest learning programs on record. With the Shield launch, a firewall checks data both entering and exiting the model for risks and policy violations. Adam Wenchel, founder and CEO of Arthur AI, says the company built the technology because of how difficult large language models (LLMs) are to deploy safely.
Nvidia is another company working in this space: its NeMo Guardrails technology provides a policy language to protect LLMs from data leakage. Wenchel, of course, has his own thoughts on this, noting that Guardrails is aimed mainly at developers, while Arthur Shield is a tool built specifically for organizations to protect against real-world attacks.
Arthur Shield ships with pre-made filters that are customizable and continuously learning; these filters help prevent sensitive or toxic data from entering or exiting the model. Wenchel further notes that sophisticated monitoring is not optional: to be effective, the system must also watch the output of the firewall itself.
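The firewall pattern described above can be sketched in a few lines of code. This is an illustrative stand-in, not Arthur Shield's actual API: the filter rules, function names, and stub model here are all hypothetical, and a real deployment would use far richer, learned classifiers rather than fixed patterns.

```python
import re

# Hypothetical sketch of the firewall pattern: every prompt and every
# response passes through filters before reaching the model or the user.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-style numbers
    re.compile(r"\b\d{16}\b"),              # bare 16-digit card numbers
]
BLOCKED_TERMS = {"confidential", "internal use only"}

def violates_policy(text: str) -> bool:
    """Return True if the text matches a PII pattern or a blocked term."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return True
    return any(p.search(text) for p in PII_PATTERNS)

def guarded_call(prompt: str, model) -> str:
    """Screen the prompt, call the model, then screen the response."""
    if violates_policy(prompt):
        return "[blocked: prompt violates policy]"
    response = model(prompt)
    if violates_policy(response):
        return "[blocked: response violates policy]"
    return response

# Stub model used only for demonstration.
echo_model = lambda p: f"Echo: {p}"

print(guarded_call("What is the weather?", echo_model))
# → Echo: What is the weather?
print(guarded_call("My SSN is 123-45-6789", echo_model))
# → [blocked: prompt violates policy]
```

Note that both directions are screened: a prompt that tries to inject sensitive data is stopped before the model sees it, and a response that leaks such data is stopped before the user sees it, which mirrors the entering-and-exiting checks the article describes.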
All in all, with Arthur Shield, organizations gain stronger protection for their private data against malicious activity. With this technology in place, they are better equipped to guard against data leakage, hallucinations, and the policy violations that can occur.