Slack, the popular chat platform owned by Salesforce, is facing backlash over its use of customer data for AI training. Users were surprised to learn that their messages were being used to train AI models without their explicit consent. The revelation caused an uproar in the Slack community, prompting the company to announce upcoming changes to its privacy policy to address the concerns.
The controversy began when a user pointed out the data usage issue on a developer platform, sparking a viral discussion online. Slack users expressed frustration over the platform's lack of transparency regarding its AI initiatives. Many were unaware that they had been automatically enrolled in Slack's AI training program and that they were required to email the company to opt out.
In response to the backlash, Slack engineer Aaron Maurer acknowledged the need for clearer privacy policies regarding the use of customer data for AI training. He admitted that the current policy was too vague and did not adequately explain how user data is utilized for AI models.
While Slack claims that it does not train its large language models (LLMs) on customer data, critics point to a discrepancy between its stated privacy principles and its actual use of data for AI. Users are calling for more transparency and clarity from Slack about its data-sharing practices to ensure their privacy is protected.
Salesforce, the parent company of Slack, has recognized the need for an update to address these concerns and has vowed to make changes to its privacy policies. This incident highlights the importance of transparency in the rapidly evolving AI landscape and the need for companies to clearly outline how user data is used in their terms of service.