Slack AI Vulnerability Could Have Exposed Data From Private Channels: Report

This article has been updated to note the vulnerability has been patched and to include a statement from Salesforce.

Slack’s AI assistant had a security flaw that could let attackers steal sensitive data from private channels in the popular workplace chat app, security researchers at PromptArmor revealed this week. The vulnerability exploited a weakness in how the AI processes instructions, potentially compromising sensitive data across countless organizations.

In response to the report, a spokesperson from Salesforce—which owns Slack—told Decrypt that the vulnerability had been fixed.

“We launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data,” the spokesperson said. “We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data.”

Slack also posted an official update on the issue.

Here’s how the hack worked.

An attacker created a public Slack channel and posted a cryptic message that, in actuality, instructed the AI to leak sensitive info—basically replacing an error word with the private information.

Image: PromptArmor

When an unsuspecting user later queried Slack AI about their private data, the system pulled in both the user’s private messages and the attacker’s prompt. Following the injected commands, Slack AI provided the sensitive information as part of its output.

The hack took advantage of a known weakness in large language models called prompt injection. Slack AI wasn’t able to distinguish between legitimate system instructions and deceptive user input, allowing attackers to slip in malicious commands that the AI then followed.

This vulnerability was particularly concerning because it didn’t require direct access to private channels. An attacker only needed to create a public channel, which can be done with minimal permissions, to plant their trap.

“This attack is very difficult to trace,” PromptArmor noted, since Slack AI didn’t cite the attacker’s message as a source. The victim saw no red flags, just their requested information served up with a side of data theft.

The researchers demonstrated how the flaw could have been used to steal API keys from private conversations. However, they warned that any confidential data could have potentially been extracted using similar methods.

Image: PromptArmor

Beyond data theft, the vulnerability opened the door to sophisticated phishing attacks. Hackers could have crafted messages that appear to come from colleagues or managers, tricking users into clicking malicious links disguised as harmless “reauthentication” prompts.

Slack’s update on August 14 that expanded AI analysis to uploaded files and Google Drive documents widened the potential attack surface dramatically. Hackers may not have even needed direct Slack access: a booby-trapped PDF could have done the trick.

PromptArmor says its team responsibly disclosed their findings to Slack on August 14. After several days of discussion, Slack’s security team concluded on August 19 that the behavior was “intended,” as public channel messages are searchable across workspaces by design.

“Given the proliferation of Slack and the amount of confidential data within Slack, this attack has material implications on the state of AI security,” PromptArmor warned in its report. The firm chose to go public with its findings to alert companies to the risk and encourage them to review their Slack AI settings after learning about Slack’s apparent inaction.

Slack AI, introduced as a paid add-on for business customers, promises to boost productivity by summarizing conversations and answering natural language queries about workplace discussions and documents. It’s designed to analyze both public and private channels that a user has access to.

The system uses third-party large language models, though Slack emphasizes that these run on its secure infrastructure. It’s currently available in English, Spanish, and Japanese, with plans to support additional languages in the future.

Slack has consistently emphasized its focus on data security and privacy. “We take our commitment to protecting customer data seriously. Learn how we built Slack to be secure and private,” Slack’s official AI guide states.

While Slack provides settings to restrict file ingestion and control AI functionality, these options may not be widely known or properly configured by many users and administrators. This lack of awareness could leave many organizations unnecessarily exposed to potential future attacks.

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.

Source: https://decrypt.co/246002/slack-ai-flaw-exposes-private-information