Samsung, a multinational technology company, recently imposed a temporary ban on the use of generative artificial intelligence (AI) tools, such as ChatGPT, Google Bard, and Bing, following an internal data leak in April 2023. The leak is suspected to have exposed proprietary information to potential malicious actors, and to safeguard its systems, Samsung has moved to develop its own AI technology to replace the existing tools.
Matt Fulmer, Cyber Intelligence Engineering Manager at Deep Instinct, highlighted the dangers of these tools in conversation with Digital Journal. He offered insight into the background of the incident, pointing to a lack of understanding of how generative AI, specifically large language models (LLMs), works, even among the developers of the technology. He suggested that the root of the issue is predominantly human error: how the tools were used, combined with a limited grasp of their capabilities.
Fulmer explained that training these AI models efficiently requires large datasets, which can include sensitive and confidential information. That information can be pulled back out through conversations with the AI and, if not properly monitored, leaked outside the organization. Cybercriminals can also use such datasets to generate malicious payloads targeted at a particular entity.
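The monitoring Fulmer describes can be partly automated. The following is a minimal sketch of what scanning stored chat transcripts for sensitive material might look like; the pattern names, rules, and transcript format are illustrative assumptions, not a description of any vendor's actual tooling.

```python
import re

# Illustrative patterns only; a real deployment would use an
# organization-specific classifier or data-loss-prevention ruleset.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_transcript(messages: list[str]) -> list[tuple[int, str]]:
    """Return (message index, pattern name) for each suspected leak."""
    findings = []
    for i, text in enumerate(messages):
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                findings.append((i, name))
    return findings

# Hypothetical transcript in which an employee pasted an internal key.
transcript = [
    "Can you review this function for bugs?",
    "Here is our deploy script, it uses key-a1B2c3D4e5F6g7H8 internally.",
]
for index, rule in scan_transcript(transcript):
    print(f"message {index}: matched rule '{rule}'")
```

Even a simple scanner like this would flag the second message above, giving a security team a chance to respond before the data circulates further.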
As a solution, Fulmer recommends that companies update their security policies, implement an "Acceptable Use Policy", and restrict the use of generative AI until suitable safeguards are in place for its secure use. Companies should also stay alert to potential threats and take concrete steps to limit the exposure of sensitive data.
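One concrete way to enforce such a policy in code, rather than relying on employees alone, is to gate prompts before they leave the corporate network. The sketch below assumes a hypothetical `gate_prompt` check and a placeholder for the outbound API call; the rules shown are examples, not a standard.

```python
import re

# Hypothetical ruleset; real policies would be tailored to the organization.
BLOCKED = re.compile(r"\b(?:CONFIDENTIAL|TRADE SECRET)\b", re.IGNORECASE)
REDACTED = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # e.g. employee emails

def gate_prompt(prompt: str) -> str | None:
    """Apply an acceptable-use check before a prompt leaves the network.

    Returns a sanitized prompt, or None if the prompt must be blocked.
    """
    if BLOCKED.search(prompt):
        return None  # policy violation: do not send at all
    return REDACTED.sub("[REDACTED]", prompt)

def submit(prompt: str) -> None:
    cleaned = gate_prompt(prompt)
    if cleaned is None:
        print("Prompt blocked by acceptable-use policy.")
    else:
        # send_to_llm(cleaned)  # placeholder for the actual API call
        print(f"Would send: {cleaned}")

submit("Summarize this CONFIDENTIAL roadmap for me.")
submit("Draft a reply to jane.doe@example.com about the outage.")
```

The design choice here mirrors Fulmer's advice: block outright what policy forbids, and redact rather than transmit anything merely risky, so sensitive data never reaches a third-party model in the first place.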
Samsung has yet to release its own AI technology, but with the ban on existing tools in place, its aim appears to be greater safety and stronger protection of its systems.