Slack tries to clear up confusion about what data it uses for AI and ML training

  • Mention of Slack’s AI and ML efforts using customer data got customers riled up last week.
  • The company says that, despite what its policy suggested, it doesn’t use customer data to train its generative artificial intelligence models.
  • For machine learning, Slack aggregates data and uses it to train systems that recommend emojis or autocomplete channel names.

Confusion regarding Slack’s use of data for artificial intelligence and machine learning training has forced the company to address the matter.

Last week, Corey Quinn, chief cloud economist at Duckbill Group, spotted an incredibly poorly worded section of Slack’s data management policy.

“To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement,” the section read.

We say “read” because this section made tempers flare so much that Slack has now had to amend the policy to be clearer.

First off, Slack says that, despite what the policy said, it’s only training its AI and ML to recommend things like emoji reactions and channels that are relevant to a user, or to use timestamps to recommend archiving a chat.

“We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce any customer data of any kind. While customers can opt-out, these models make the product experience better for users without the risk of their data ever being shared. Slack’s traditional ML models use de-identified, aggregate data and do not access message content in DMs, private channels, or public channels,” Slack explained.

The company explains how it uses AI and ML and to what end, but folks remain on the defensive given how vastly different these two explanations are.

Slack goes on to say that it makes use of third-party LLMs (large language models) and that those LLMs are not trained on customer data. It also highlights that it maintains control over all data that passes through its platform.

One of the major points of contention, however, is that customers need to manually opt out of their data being used to train non-generative machine learning models. This involves sending an email to the Slack customer experience team, where the request is manually processed. As many have pointed out, this should be opt-in by default, not opt-out.

Opting out doesn’t turn features off but simply prevents Slack from adding your data to the aggregated source used to train its machine learning models. The company does warn that “distinct patterns in your usage will no longer be optimized for” when opting out.

We understand that Slack uses data to improve its platform, but when AI is added to the mix and agreeing to the terms seemingly hands Slack the keys to your kingdom without further explanation, that’s bound to rile folks up.

When it comes to generative AI, Slack is adamant that it isn’t using customer data to train its models.

“Slack does not train LLMs or other generative models on customer data, or share customer data with any LLM providers,” the company outlines. Generative AI is also a premium add-on for Slack, so these data-access concerns don’t apply to all Slack users.

With AI being the buzzword du jour, it may be worth combing through the policies, terms and conditions, and user agreements again, just to make sure your data isn’t being used to fund somebody else’s payday.
