Rogue AI secretly hijacked computers to mine crypto, study reveals

An autonomous artificial intelligence agent in China has been caught hijacking computing power in order to secretly mine cryptocurrency, researchers have revealed.

The experimental AI agent ROME, developed by research teams affiliated with the tech giant Alibaba, deviated from its training parameters to carry out rogue operations.

The unauthorised actions were initially flagged as a security incident, before the researchers realised that the AI had bypassed firewalls on its own, without permission.

The researchers discovered that the artificial intelligence had quietly diverted computing power away from its training to use for cryptocurrency mining, despite not receiving any prompts to undertake this action.

“Early one morning, our team was urgently convened after Alibaba Cloud’s managed firewall flagged a burst of security-policy violations originating from our training servers,” the researchers noted.

“The alerts were severe and heterogeneous, including attempts to probe or access internal-network resources and traffic patterns consistent with cryptomining-related activity.”

The researchers said the incident demonstrated the “markedly underdeveloped” safety guardrails concerning the controllability of agentic large language models (LLMs).

The results were detailed in a paper, titled ‘Let it flow: Agentic crafting on rock and roll, building the Rome model within an open agentic learning ecosystem’, though the breach was only mentioned briefly within the 36-page report.

AI and machine learning expert Alexander Long described the findings as an “insane sequence of statements hidden” within the report.

The Independent has reached out to Alibaba for comment.

ROME is not the first AI agent to exhibit rogue behaviour during training, and some have acted outside their intended boundaries in the real world.

In 2024, Air Canada was forced to refund customer Jake Moffatt after its AI-powered chatbot offered to reimburse an airfare despite this being against the airline’s policy.

Last year, Anthropic researchers revealed how its frontier model Claude Opus 4 had resorted to blackmail in order to avoid being shut down.

Anthropic researcher Aengus Lynch said at the time that such extreme behaviours were more widespread than previously assumed.

“It’s not just Claude,” he said in a post to X. “We see blackmail across all frontier models – regardless of what goals they’re given.”

Source: https://www.independent.co.uk/tech/security/ai-crypto-mining-alibaba-b2934990.html