Privacy Breach Risk in Nvidia AI Technology Raises Concern Among Researchers

Researchers have raised concerns over privacy risks in Nvidia’s artificial intelligence (AI) software after finding that the NeMo Framework is susceptible to manipulation that puts private data at risk. Large language models, which power generative AI products such as chatbots, are among the products affected. After running the Nvidia system on their own data sets, the analysts found it easy to bypass the framework’s restrictions and trick the AI into releasing personally identifiable information. In response, the researchers have advised their customers to avoid the Nvidia software. Nvidia has since informed Robust Intelligence that it has fixed one of the root causes behind the problem.