The Claude by Anthropic app logo appears on the screen of a smartphone in Reno, United States, on November 21, 2024. (Photo by Jaque Silva/NurPhoto via Getty Images)
For some time, the business model of the relatively few cybercriminal geniuses who create sophisticated malware such as ransomware has been to offer their malware and services to less sophisticated cybercriminals on the Dark Web, sometimes even providing the delivery systems, generally in return for a percentage of the ransom received. AI, however, has rapidly changed this model, as even unsophisticated criminals can now leverage artificial intelligence to perpetrate a variety of scams.
AI tools can harvest vast amounts of data from social media and other publicly available sources, enabling cybercriminals to create specifically targeted phishing emails, called spear phishing emails, that their intended victims are more likely to trust. These messages often lure victims into providing personal information that can lead to identity theft, or into making a payment under some pretext.
Readily available deepfake video and voice cloning technology has enabled cybercriminals to perpetrate a variety of scams, including the grandparent scam (also called the family emergency scam) and the Business Email Compromise scam. The latter has long relied on social engineering tactics carried out primarily through emails in which the scammer poses as a company executive and convinces lower-level employees to authorize a payment under some pretense. According to the FBI, this scam, which it has tracked since 2013, accounted for worldwide losses of more than $55 billion between October 2013 and December 2023.
Now, with the advent of AI-powered deepfake and voice cloning technology, scammers have upped the stakes. In 2024 the engineering firm Arup lost $25 million to cybercriminals who posed as the company's CFO in deepfaked video calls and persuaded an employee to transfer the money.
But things aren’t as bad as you think. They are far worse.
Anthropic, the company that developed the Claude chatbot, recently released a report detailing how its chatbot had been used to develop and carry out sophisticated cybercrimes. The report described an evolution in cybercriminals' use of AI: no longer merely a tool for developing malware, AI is now being deployed as an active operator of a cyberattack, an approach the report refers to as "vibe-hacking." The report gave the example of one UK-based cybercriminal, identified as GTG-5004, who used Claude to scan thousands of VPN endpoints to find companies vulnerable to a ransomware attack, determine how best to penetrate the companies' networks, create malware with evasion capabilities to steal sensitive data, deliver the malware, exfiltrate the data, and sift through it to determine which material could best be used to extort the hacked company, even drawing on psychology to craft the ransom-demand emails. Claude was also used to analyze the targeted company's stolen financial records to determine how much Bitcoin to demand in exchange for not publishing the stolen material.
In one month, GTG-5004 used Claude to attack 17 organizations in government, healthcare, emergency services, and religious institutions, making ransom demands ranging from $75,000 to more than $500,000.
GTG-5004 then began selling ransomware-as-a-service to other cybercriminals on the Dark Web, offering tiered packages that include encryption capabilities and methods designed to help hackers avoid detection. Notably, unlike in the past, when technologically sophisticated criminals would sell or lease malware they had personally created, the report indicated that "This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques or Windows internals manipulation without Claude's assistance."
The result is that a single cybercriminal can now do what previously required an entire team skilled in cryptography, Windows internals, and evasion techniques: create ransomware, make both strategic and tactical decisions about targeting, exploitation, and monetization, and adapt to whatever defensive measures are encountered. All of this lowers the bar for criminals seeking to commit cybercrimes.
The report also detailed how Claude was being misused by North Korean operatives to obtain remote jobs at tech companies. According to the report, "Traditional North Korean IT worker operations relied on highly skilled individuals recruited and trained from a young age within North Korea. Our investigation reveals a fundamental shift: AI has become the primary enabler allowing operators with limited technical skills to successfully infiltrate and maintain positions at Western technology companies."
The report described how, using AI, North Korean operators who cannot write basic code on their own or communicate in English are able to pass interviews and get jobs at tech companies, a scheme that generates hundreds of millions of dollars annually to fund North Korea's weapons programs. Making matters worse, with AI each operator can maintain multiple positions at American tech companies simultaneously, something that would have been impossible otherwise.
Anthropic has responded to the threats it identified by banning the accounts associated with these operations, developing a tailored classifier to identify this type of activity, and incorporating new detection measures into its existing safety enforcement systems. Additionally, Anthropic shared its findings with other companies as well as the security and safety community to help them recognize and defend against the threats posed by criminals using AI platforms. Even so, the threat of AI-enabled cybercrime looms large.
The Anthropic report is indeed a wake-up call to the entire AI industry.
Source: https://www.forbes.com/sites/steveweisman/2025/09/02/ai-is-making-cybercrime-easier-for-unsophisticated-criminals/