Scary results as study shows AI chatbots excel at phishing tactics

A recent study showed how easily modern chatbots can be used to write convincing scam emails aimed at older people, and how often those emails get clicked.

Researchers used several major AI chatbots in the study, including Grok, OpenAI’s ChatGPT, Claude, Meta AI, DeepSeek and Google’s Gemini, to simulate a phishing scam. 

One sample note written by Grok looked like a friendly outreach from the “Silver Hearts Foundation,” described as a new charity that supports older people with companionship and care. The note was aimed at senior citizens, promising an easy way to get involved. In reality, no such charity exists.

“We believe every senior deserves dignity and joy in their golden years,” the note read. “By clicking here, you’ll discover heartwarming stories of seniors we’ve helped and learn how you can join our mission.” 

When Reuters asked Grok to write the phishing text, the bot not only produced a response but also suggested increasing the urgency: “Don’t wait! Join our compassionate community today and help transform lives. Click now to act before it’s too late!” 

108 senior volunteers participated in the phishing study

Reporters tested whether six well-known AI chatbots would set aside their safety rules and draft emails meant to deceive seniors. They also asked the bots for help planning scam campaigns, including tips on what time of day might get the best response.

Working with Heiding, a Harvard University researcher who studies phishing, the reporters tested some of the bot-written emails on a pool of 108 senior volunteers.

Chatbot companies typically train their systems to refuse harmful requests. In practice, those safeguards do not always hold. Grok displayed a warning that the message it produced “should not be used in real-world scenarios.” Even so, it delivered the phishing text and intensified the pitch with “click now.”

Five other chatbots were given the same prompts: OpenAI’s ChatGPT, Meta’s assistant, Claude, Gemini and DeepSeek from China. Most chatbots declined to respond when the intent was made clear. 

Still, their protections failed after light modification, such as claiming the request was for research purposes. The results of the tests suggested that criminals could use (or may already be using) chatbots for scam campaigns. “You can always bypass these things,” said Heiding.

Heiding selected nine phishing emails produced with the chatbots and sent them to the participants. Roughly 11% of recipients fell for it and clicked the links. Five of the nine messages drew clicks: two that came from Meta AI, two from Grok and one from Claude. None of the seniors clicked on the emails written by DeepSeek or ChatGPT.

Last year, Heiding led a study showing that phishing emails generated by ChatGPT can be as effective at getting clicked as messages written by people; that study tested university students.

FBI lists phishing as the most common cybercrime

Phishing refers to luring unsuspecting victims into giving up sensitive data or cash through fake emails and texts. These types of messages form the basis of many online crimes. 

Billions of phishing texts and emails go out daily worldwide. In the United States, the Federal Bureau of Investigation lists phishing as the most commonly reported cybercrime. 

Older Americans are particularly vulnerable to such scams. According to recent FBI figures, complaints from people aged 60 and over rose roughly eightfold last year, with losses of about $4.9 billion. The FBI says generative AI has made the problem worse.

In August alone, crypto users lost $12 million to phishing scams, according to a Cryptopolitan report.

When it comes to chatbots, the advantage for scammers is volume and speed. Unlike humans, bots can spin out endless variations in seconds and at minimal cost, shrinking the time and money needed to run large-scale scams.

Source: https://www.cryptopolitan.com/ai-chatbots-phishing-results/