Slackbot Is Now Your Company-Mandated Personal Assistant

The new Slackbot shows how corporate AI shifts from optional tool to mandatory infrastructure

Slack is transforming Slackbot into a full AI assistant that can sift through messages, files, and integrated apps to write plans, summarize content, and draw insights. According to The Verge, Slack’s Chief Product Officer Rob Seaman said the company rebuilt Slackbot “from the ground up as a personalized AI companion,” and that the models run inside Slack’s AWS boundary in a way that “no data leaves the firewall, [and] no data is used in the training of the models at all.”

The new Slackbot is in beta with about 70,000 users and is expected to roll out broadly by the end of the year or early 2026, according to reports.

Companies can disable Slackbot via admin settings, but individual users cannot opt out. Slack’s own privacy framework treats workspace owners as the gatekeepers for data control.

Who Controls Slackbot?

Slack’s “Privacy Principles: Search, Learning and Artificial Intelligence” page states: “Our guiding principle … is that the privacy and security of Customer Data is sacrosanct.” It also asserts: “We do not develop generative AI models using Customer Data.” But it goes on to say that for predictive models, such as emoji or channel recommendations, Slack’s systems “analyze Customer Data (e.g. messages, content, and files).”

The same page spells out the sole opt-out mechanism, verbatim:

“To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at feedback@slack.com with your Workspace/Org URL and the subject line ‘Slack Global model opt-out request.’ We will process your request and respond once the opt-out has been completed.”

Because that is the only described opt-out, individual users have no mechanism to refuse participation directly.

The 2024 Hacker News thread “Slack AI Training with Customer Data” brought this arrangement to public attention. One user quoted Slack’s policy:

“Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out … feedback@slack.com … we will process your request …”

That thread also includes comments about silent opt-in and missing user-level opt-out options.

Slack responded in public statements and in that community thread by clarifying that Slack AI is “a separately purchased add-on that uses Large Language Models (LLMs) but does not train those LLMs on customer data,” and that the data “remains in-house and is not shared with any LLM provider.” 


Even so, only workspace owners can disable or modify Slack’s AI features. The Slack help article on managing AI access is explicit: “Owners and Admins can enable or disable access to AI features.” There is no mention of a per-user opt-out.

Slackbot’s new role depends on broad read access across a workspace. It can retrieve files, scan channel context, and generate summarized responses. Although Slack draws a line between “training” and “inference,” users are left to interpret what that distinction means in practice. Because both involve exposing message content to automated systems, the difference offers little practical reassurance to end users.
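To make that distinction concrete, here is a deliberately toy sketch, not Slack’s implementation and with every name hypothetical. In inference, message content flows through fixed model parameters to produce an output; in training, the content is used to update the parameters themselves, which is the step Slack says never happens with customer data.

```python
# Toy illustration of "inference" vs. "training" -- not Slack's code.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4,))  # stand-in for model parameters

def embed(message: str) -> np.ndarray:
    """Hypothetical feature vector for a message."""
    return np.array([len(message), message.count(" "),
                     message.count("?"), 1.0])

def inference(message: str) -> float:
    # The message content passes through the model, but the
    # parameters are read-only: nothing about it is retained.
    return float(weights @ embed(message))

def training_step(message: str, label: float, lr: float = 1e-3) -> None:
    global weights
    x = embed(message)
    error = (weights @ x) - label
    # Here the message content alters the parameters themselves --
    # the operation Slack says it does not perform on customer data.
    weights -= lr * error * x

score = inference("Q3 roadmap draft attached -- thoughts?")
```

In both paths the raw message is exposed to the system; only the second leaves a permanent trace in the model, which is why the policy distinction matters more to Slack than to a user watching their messages get read either way.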

Default Exposure as Product Strategy

The fact that no user-level opt-out exists is significant for organizations in regulated industries (banking, legal, healthcare) where message content may be sensitive. If Slackbot processes internal content, even temporarily, companies may need logs or documentation of what was seen. At present, Slack has not published an audit trail of Slackbot’s data access.
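For illustration, an access record of the kind compliance teams typically request might look like the following. This is a hypothetical schema sketched for this article; Slack publishes no such log today, and every field name is an assumption.

```python
# Hypothetical audit-record schema -- nothing like this exists in Slack's API.
import json
from datetime import datetime, timezone

access_event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "actor": "slackbot-ai",                  # which system read the data
    "action": "summarize",                   # what it did with the content
    "resources": ["C0123456789/1700000000.000100"],  # channel/message touched
    "requested_by": "U0456789012",           # user whose prompt triggered the read
    "model_boundary": "aws-vpc-internal",    # where inference ran
}
print(json.dumps(access_event, indent=2))
```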

Slack has taken reactive steps too. After a vulnerability in the third-party extension Struct Chat leaked private user data, Slack revoked its API access and changed policies on how third-party apps may store message data. Slack’s own changelog says that in May 2025 it tightened rate limits and terms for non-marketplace apps, capping how many messages they can fetch per minute in order to curb bulk data access by unvetted apps.
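For developers, the practical effect is that bulk reads now have to be paced. Below is a minimal sketch of a compliant history fetch, assuming the official slack_sdk Python client and a bot token with the channels:history scope; the 15-message page size and one-request-per-minute pacing mirror the reported caps for non-marketplace apps but are illustrative here.

```python
# Sketch of a history fetch that respects Slack's tightened rate limits.
import os
import time

from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def fetch_history(channel_id: str, page_size: int = 15):
    """Page through a channel's messages, backing off on HTTP 429."""
    cursor = None
    while True:
        try:
            resp = client.conversations_history(
                channel=channel_id, limit=page_size, cursor=cursor
            )
        except SlackApiError as e:
            if e.response.status_code == 429:
                # Slack tells rate-limited clients how long to wait.
                time.sleep(int(e.response.headers.get("Retry-After", 60)))
                continue
            raise
        yield from resp["messages"]
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            return
        time.sleep(60)  # illustrative pacing for a 1-request-per-minute cap

# Usage (hypothetical channel ID):
# for msg in fetch_history("C0123456789"):
#     print(msg.get("text", ""))
```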

These external fixes help prevent unauthorized exfiltration, but they do not address the internal transparency gap. Slack’s trust page continues to call customer data “sacrosanct” and its policies maintain that generative models do not use customer messages or files.

But by making privacy a feature controlled at the organizational level, not by individual users, Slack is testing whether its reputation as a privacy-first communications platform can survive this shift. As Slackbot transitions from reminder tool to active AI collaborator, many users may find they have less control, not more, over how work conversations get mined.


Mukundan Sivaraj
Mukundan covers the AI startup ecosystem for AIM Media House. Reach out to him at mukundan.sivaraj@aimmediahouse.com.