Mattel Tests ChatGPT for Cybersecurity Experiments

Toy manufacturer Mattel is experimenting with the generative AI tool ChatGPT to improve its cybersecurity practices. However, there are concerns about the tool's accuracy and its potential for misuse without human supervision. ChatGPT maker OpenAI has published a report acknowledging that even state-of-the-art models are prone to producing falsehoods. Mattel employees using ChatGPT receive training on how to use generative AI tools securely, and cyber teams are advised to ask specific questions with objective answers to reduce inaccuracies.

Companies such as Goldman Sachs and legal publisher RELX are also experimenting with the tool, while others, including JPMorgan and Verizon, have banned it over such concerns. Vu Le, a manager at Mattel, doesn't believe generative AI has yet reached a "leap of faith" moment where companies can rely on it without human oversight. Regardless, organizations should put appropriate data restrictions in place before deploying ChatGPT or similar tools.