- ChatGPT can be used for leaking your data, warning says
- Buterin reacts to this warning
Ethereum co-creator and its frontman, Vitalik Buterin, has shared a hot take on a recent warning that OpenAI’s product, ChatGPT, can be utilized to leak personal user data.
ChatGPT can be used for leaking your data, warning says
X user @Eito_Miyamura, a software engineer and an Oxford graduate, published a post, revealing that after the new update, ChatGPT may pose a significant threat to personal user data.
Miyamura tweeted that on Wednesday, OpenAI rolled out full support for MCP (Model Context Protocol) tools in ChatGPT. This upgrade allows the AI bot to connect to a user’s Gmail box, Google Calendar, SharePoint and other services.
However, Miyamura and his friends spotted a fundamental security issue here: “AI agents like ChatGPT follow your commands, not your common sense.” He and his team have staged an experiment that allowed them to exfiltrate all private user information from the aforementioned sources.
Miyamura shared all the steps they followed to perform this test data leak — it was done by sending a user a calendar invite with a “jailbreak prompt to the victim, just with their email.” The victim needs to accept the invite.
What happens next is the user tells ChatGPT “to help prepare for their day by looking at their calendar.” After the AI bot reads the malicious invite, it is hijacked, and from that point on it will “act on the attacker’s command.” It will “search your private emails and send the data to the attacker’s email.”
Miyamura warns that while so far ChatGPT needs a user’s approval for every step, in the future many users will likely just click “approve” on everything AI suggests. “Remember that AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data,” the developer concludes.
Buterin reacts to this warning
In response, Vitalik Buterin slammed the “AI governance” idea in general as “naive.” He stated that if utilized by users to “allocate funding for contributions,” hackers will hijack it to syphon all the money from users.
Instead, he suggested an alternative approach called “info finance,” which is an open market where AI models can be checked for security issues: “Anyone can contribute their models, which are subject to a spot-check mechanism that can be triggered by anyone and evaluated by a human jury.”
Source: https://u.today/vitalik-buterin-reacts-to-crucial-chatgpt-security-warning