Nvidia is taking measures to keep AI conversations on track with its open-source NeMo Guardrails framework. The framework gives organizations that build AI applications on language models, such as chatbots, a way to keep responses topical, accurate, ethical and secure. Powered by a sophisticated contextual dialogue engine, NeMo Guardrails monitors conversations between users and AI applications. To define conversational flows, the framework uses Colang, a domain-specific language for describing how a dialogue should unfold. It also offers pre-built templates for topical, safety and security guardrails, along with enterprise-level support through the Nvidia AI Enterprise suite of tools.
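As a rough illustration of how Colang describes a conversational flow, the sketch below defines a topical guardrail that steers a hypothetical chatbot away from political questions. The message names and example utterances here are invented for illustration, not taken from Nvidia's templates:

```colang
define user ask about politics
  "what do you think about the president?"
  "which party should I vote for?"

define bot refuse to discuss politics
  "I'm a product assistant, so I can't discuss politics."

define flow politics rail
  user ask about politics
  bot refuse to discuss politics
```

When a user's message matches the intent described by the example utterances, the dialogue engine follows the flow and returns the canned refusal instead of passing the request to the underlying language model.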
Jonathan Cohen, Vice President of Applied Research at Nvidia, emphasized the importance of keeping AI responses safe and secure once applications are deployed. With AI “hallucinations” a growing concern, safety guardrails help ensure responses are accurate and free of misinformation or toxic content. Security guardrails are equally necessary to prevent AI applications from becoming an attack surface for cybersecurity threats.
Jonathan Cohen is Vice President of Applied Research at Nvidia, a computer technology company specializing in graphics processing units (GPUs), artificial intelligence (AI), data centers, automotive technology and deep learning. He is the driving force behind Nvidia’s NeMo Guardrails project. A veteran of the company, Cohen has been involved with Nvidia in various capacities since 2001, focusing mainly on university relations and research collaborations in emerging areas.