Since the beginning of 2023, artificial intelligence (AI) tools such as ChatGPT have quickly gained traction. ChatGPT is currently being used to write poetry, compose essays, and answer obscure questions, and a variety of new apps built on it have become available for download. However, some privacy experts warn of the privacy risks that come with using the platform.
The European Data Protection Board (EDPB) took notice and established a ChatGPT task force to investigate the safety of using ChatGPT and to determine whether new privacy regulations are needed. Sarah Hospelhorn, Chief Marketing Officer at BigID, cautions users to be mindful of the safety of their data while using these apps. She urges consumers to understand how their data is being used, since without proper data management it could be exposed to unauthorized access.
Aaron Rafferty, CEO of Standard DAO, has also stressed the security risks of using ChatGPT, noting that the compromise of sensitive conversations and unauthorized access to personal information are the biggest areas of concern. He added that AI-generated misinformation could violate user privacy and be used to manipulate public opinion.
Sameer Ahmed Khan, co-founder and CEO of Social Champ, believes privacy concerns are limiting the use of ChatGPT in business. He explains that malicious actors can exploit security gaps to steal data without alerting the target. He ultimately suggests using Microsoft 365 Copilot to benefit from up-to-date security measures.
Users of these apps should review each app's privacy policy carefully, as policies vary from app to app. Some ChatGPT apps, such as Ai Chat – GPT Chat Bot, state that they do not collect any user data. Meanwhile, US regulators are expected to establish policies that strike a workable balance between fostering innovation and upholding user privacy.