AI Platforms Raise Data Privacy Concerns as Tracking Data Breaches Proves Challenging
The rapid rise of artificial intelligence (AI) platforms has intensified concerns about data privacy. Platforms such as ChatGPT and Google Bard have drawn attention to how difficult it is to track data breaches involving AI systems. Although these platforms fall under India's Digital Personal Data Protection (DPDP) Act, experts say monitoring AI-related breaches remains difficult in practice.
A major obstacle is data traceability: understanding how generative AI platforms acquire personal data for model training and how that data later surfaces in responses to user queries. Unlike conventional data handlers, whose data flows can be audited and who can therefore be held accountable for breaches, AI platforms make it hard to link a specific piece of training data to a specific output.
Rakesh Maheshwari, a former senior director at the Ministry of Electronics and Information Technology (MeitY), pointed to a potential loophole: a generative AI platform could collect data yet claim it is not intended for Indian users and therefore falls outside the Act.
While these platforms may comply with regulations on sharing personal data, tracking the data used for training is another matter. Even if personal information is used without consent, it is nearly impossible to identify which data contributed to a specific result.
In addition to concerns about data security, generative AI faces other issues such as copyright infringement, dissemination of false information, and biased algorithms. Vinay Phadnis, CEO of Atomic Loops, pointed out that these generative AI platforms can only control the data until it is used to train their models. Afterward, they have no control over how the information is utilized when responding to prompts.
To address the tracing problem, Phadnis proposed the incorporation of AI signatures at the end of AI-generated responses. These signatures would verify the authenticity of the data used and provide transparency regarding the datasets employed, including whether they comply with security protocols.
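The article does not specify how such AI signatures would work; a minimal sketch of one possible interpretation is shown below, assuming a platform-held secret key and a hypothetical manifest listing the datasets behind a response. All function and variable names here are illustrative, not part of any real platform's API.

```python
import hashlib
import hmac
import json

# Placeholder secret; a real deployment would likely use asymmetric keys
# so that anyone could verify a signature without being able to forge one.
SECRET_KEY = b"platform-signing-key"

def sign_response(response_text: str, dataset_manifest: list[str]) -> dict:
    """Attach a hypothetical 'AI signature' binding a response to the
    manifest of datasets the platform claims were used."""
    manifest_digest = hashlib.sha256(
        json.dumps(sorted(dataset_manifest)).encode()
    ).hexdigest()
    payload = f"{response_text}|{manifest_digest}".encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {
        "response": response_text,
        "dataset_manifest_digest": manifest_digest,
        "signature": signature,
    }

def verify_response(signed: dict, dataset_manifest: list[str]) -> bool:
    """Recompute the signature to check that neither the response text
    nor the claimed dataset manifest has been altered."""
    expected = sign_response(signed["response"], dataset_manifest)
    return hmac.compare_digest(expected["signature"], signed["signature"])
```

Under this sketch, a verifier holding the declared dataset manifest could confirm that a response genuinely came from the platform and was generated against that manifest, giving regulators a starting point for the transparency Phadnis describes.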
In conclusion, the rise of AI platforms has sparked data privacy concerns because breaches are so hard to track. Data traceability, copyright infringement, misinformation, and algorithmic bias all pose significant hurdles. AI signatures could improve accountability and transparency by shedding light on the datasets used. As the digital landscape continues to evolve, finding effective ways to protect data privacy in AI systems remains a crucial task.