AI training with messages: Slack responds to criticism
Users are angry that Slack feeds user data into AI models. Slack insists that not all AI is the same. There is also an opt-out, though only at the organization level.
An iPhone screen showing the Slack messenger app. (Image: dpa, Fabian Sommer/dpa)
Slack, Salesforce's chat and communication software that is widely used in corporate environments, is currently facing strong headwinds from users who fear their messages have been used to train Slack's generative AI products. The company introduced Slack AI in February, touting it as a practical tool for, among other things, summarizing conversations.
Opt-out via email
The controversy over AI training with user data was sparked by Corey Quinn of Duckbill, who highlighted an excerpt from Slack's privacy principles in social media posts. The original post on X is no longer available. In the excerpt, Slack reserves the right to use customer data to train its "global models" unless users actively opt out. Individual users cannot opt out themselves, however; instead, an administrator of the organization paying for Slack must request the opt-out from Slack support. Users vented their frustration in a Hacker News thread, among other places.
No training for generative AI products
Aaron Maurer, a software engineer at Slack, wrote in a post on Threads that Slack does not train LLMs on user data. Slack has since responded to the backlash in an official statement and adjusted the wording of its privacy policy. It now states that customer data flows only into the training of non-generative AI/ML models, which are used, for example, to recommend emojis or channels. According to Slack, these models cannot reproduce user data such as messages, and Slack itself has no access to them. Slack AI, by contrast, relies on LLMs from third-party providers, and the data remains in the customer's workspace. Slack does not develop its own LLMs or other generative AI models with user data. (ndi)