Blockchain’s crucial role in taming agentic AI

Artificial intelligence (AI) needs blockchain for the same reasons all data systems do: security, transparency, and accountability. If you’re still unsure why, we have two words for you: agentic AI.

These AI agents are far more capable than the image generators and chatbots we’re using now. They can perform tasks outside their own environments: holding conversations with external parties, organizing events, even accessing your bank account and spending your money. They’ll add value to your life and make everything so efficient that the entire digital economy may one day function completely autonomously.

That’s both exciting… and scary.

Adding blockchain’s trust to agentic AI’s uncertainty sounds like a great idea, and at the recent London Blockchain Conference 2025, several speakers and panelists agreed. Far fewer people, however, can tell you exactly how it should be implemented, and fewer still have integrated blockchain into AI model development. Once again, it’s a job for those with understanding and experience spanning both technological domains to lead the way.

“I think blockchain will be critical to every AI model moving forward,” said AI expert and FICO Chief Analytics Officer Dr. Scott Zoldi at the London event.


The data analytics and credit scoring firm is one of the few currently using a blockchain-based system to set and follow the rules on which its AI models are based. The on-chain records act as a check on new models as they’re built over several months, ensuring they’re constructed correctly, follow set development standards, use only approved technologies, and behave ethically. The records also define how the models are tested and can work as a rollback mechanism if something goes wrong.

“What are the permissions, and what am I allowed to use the AI to do? How far can it go? Blockchain will help us not go too far,” he said.
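FICO hasn’t published the internals of its system, so the sketch below is purely illustrative: a minimal Python check, with hypothetical rule sets and field names, showing how development standards and permissions recorded up front could be enforced against each step a model-building team logs.

```python
# Illustrative only: FICO has not published its implementation, and the
# rule sets and field names below are hypothetical. The idea is that
# governance records the permitted techniques and data sources up front,
# and every logged development step is checked against those rules.

ALLOWED_TECHNIQUES = {"logistic_regression", "gradient_boosting"}
ALLOWED_DATA_SOURCES = {"credit_bureau_data", "application_data"}

def check_development_step(step: dict) -> list[str]:
    """Return the rule violations found in one recorded development step."""
    violations = []
    if step["technique"] not in ALLOWED_TECHNIQUES:
        violations.append(f"technique not permitted: {step['technique']}")
    for source in step["data_sources"]:
        if source not in ALLOWED_DATA_SOURCES:
            violations.append(f"data source not permitted: {source}")
    return violations

# A step that strays outside the recorded standards gets flagged.
step = {"technique": "unvetted_deep_net",
        "data_sources": ["credit_bureau_data", "scraped_social_media"]}
print(check_development_step(step))
# ['technique not permitted: unvetted_deep_net',
#  'data source not permitted: scraped_social_media']
```

Anchoring the rule sets and each check result on a blockchain is what would make such a record tamper-evident, which is the part Dr. Zoldi is pointing at.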

Agents are coming!

Most of the “AI” you’ve likely interacted with over the past few years is the generative kind—consumer-grade systems that produce responses in the form of text, images, videos, and music based on user prompts. They’ve come a long way, but are still rudimentary in terms of their capabilities. They can provide advice that you may or may not choose to follow, but that’s the limit of their influence on real-world outcomes.

Agentic AI is the next step—systems that can perform actual tasks, interacting with apps and devices outside their own environment. Some of the more common scenarios you’ll hear about at the entry level are: personal shopping and budgeting services; more sophisticated “smart home” systems; and personal assistants that can schedule appointments, send basic emails, and arrange vacations.

Some of the above exist in some form already, but it’s still the user’s job to make final decisions or approve actions. The key difference with real agentic AI is autonomy. The user entrusts the AI not just to find the best option but to act upon it without additional intervention. At first, we’ll trust AI agents to send messages and arrange meetings. Then we’ll allow them to make small purchases like groceries or clothes.


Autonomy is a key word—and then, trust

If autonomy is the gateway to agentic AI, then trust is its cornerstone. How far will you trust an AI to speak to others on your behalf, or to spend your money? And after that, then what? These are great hypotheticals for dinner party conversations.

Going one step further, will you trust your autonomous AI agents to make investments for you, or manage debts? Start and run a small business on your behalf? Book an appointment with a health specialist based on data from your wristband? Send applications for jobs, negotiate your salary, or enroll in courses with minimal or no input from you before doing so?

The more experience we gain with these agents, and the more personal information about us they learn, the more we’ll trust them (or at least, that’s how it’s supposed to go). Fast-forward a few decades, and it’s conceivable that much of the world economy could be running autonomously. AI agents could be managing entire supply chains and financial markets in the background, making key trade and resource decisions for corporations and governments, and performing research and experiments in the name of science.

Stepping back from the techno-optimism for a second, it’s easy to see how things could go horribly wrong, even at the basic levels. No one wants to find out they just lost half a million from their savings because a malfunctioning AI agent bought a non-fungible token (NFT) collection they’d never heard of, or auto-resigned them from a job because it felt their career was stalling there. Forget the everyday “hallucinations” and bad advice you got from ChatGPT. What if an agent set off a chain reaction of events in your real life, and you couldn’t intervene fast enough to stop it because all of its communication with other AI agents happens in GibberLink?

Life could get pretty messy, even nightmarish. There’s no easy guarantee we’ll avoid dystopian scenarios in an increasingly autonomous economy. But most of us can probably agree that development and maintenance of such powerful systems need to be more transparent and verifiable.

That’s where blockchain would come in. We say “would” because so far, there’s little evidence that blockchain is being deployed by many AI firms at any stage of their development and testing processes.

Entrepreneur and autonomous vehicle pioneer Sebastian Thrun acknowledged that blockchain could play a role, saying, “Blockchain for me stands for trust, for journal-keeping, for openness and accessibility, immutability, more than anything else. And it’s entirely orthogonal to what AI does today.”


AI developers are struggling with accountability, explainability, jailbreaking, and understanding the complexity of their creations and what they do. But if blockchain has specific solutions to these problems, they’re not widely known in the industry yet.

“If some of the thinking from this (blockchain) community could help us out, it would be great. But I don’t know what it is, I wish I did,” he said.


AI agents, trust, ‘humans in the loop,’ and blockchain

FICO’s Dr. Zoldi has been a prominent speaker and panelist at the London Blockchain Conference for the past three years. As well as advocating for blockchain-based trust systems in agentic AI, Dr. Zoldi speaks frequently of the need to keep “humans in the loop” of AI’s decision-making process, or for us to at least be aware of what AI agents are doing on our behalf.

London Blockchain Conference 2025 featured a panel discussion that included Dr. Zoldi, Thrun, and Anthropic Bug Bounty Program participant Cüneyt Eti. The panel, titled “Auditable AI: Building Trust You Can Prove,” looked at the definition of “trust” itself, and whether there is a role for blockchain in the mix.

Eti, who described one of his roles as “trying to jailbreak frontier models,” noted that blockchain’s basic promise is decentralization and transparency, while AI today “is pretty centralized, and a black box.” Blockchain, he said, could go a long way toward resolving some of these problems.

“Today’s models are pretty capable, and can do many of the things we fear. The idea is to prevent models from doing that.”


One big problem, he said, is AI systems training on online data, over half of which may itself be AI-generated, even in 2025. Data provenance is a big deal here, and it’s not easy to connect the dots between an AI’s training data and its output. Blockchain could help prevent this “vicious cycle” by verifying which content was generated by actual humans.

“I think how blockchain will become handy is going to be, maybe everyone will have at some point digital wallets, and we are going to sign our transactions, and (verify) content created by humans with those wallets,” he said.
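The article doesn’t spell out a particular scheme, but the kind of wallet-based signing Eti describes can be sketched with an ordinary Ed25519 keypair. The example below assumes the PyNaCl library and treats a bare keypair as a stand-in for a “digital wallet”; the timestamping and on-chain anchoring he alludes to would happen elsewhere.

```python
# A hedged sketch of wallet-signed content, assuming the PyNaCl library.
# A bare Ed25519 keypair stands in for a "digital wallet"; a real system
# would also anchor the signature and a timestamp on-chain.
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

wallet_key = SigningKey.generate()   # the human author's private signing key
verify_key = wallet_key.verify_key   # the public key anyone can check against

content = b"This paragraph was written by a human."
signed = wallet_key.sign(content)    # signature vouching for the content

# A third party holding only the public key checks provenance.
try:
    verify_key.verify(signed.message, signed.signature)
    print("Signature valid: content is attributable to this wallet.")
except BadSignatureError:
    print("Signature invalid: provenance cannot be established.")
```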

Panel moderator and Times journalist Danny Fortson asked: After a high-stakes decision by an AI agent, will you be able to show which model made it, what rules it followed, and who signed it off? If the answer to any of these is no, then “trust is hope and not a system.”


Thrun appeared to have more personal trust in AI agents than his fellow panelists, saying they should remain small and very task-specific (like planning and booking a vacation). People in general, he said, would put more trust in their AI agents as they fed them more information about themselves, and he would “100% yes” trust an AI agent to do his taxes.

Dr. Zoldi said his team has been using blockchain as part of its development model “for about seven years now.” FICO’s AI models can and do impact people’s financial lives, and to make sure its AI systems meet tough standards, the blockchain holds records where “we codify that model development standard and show that we have met it.”

“The blockchain becomes that immutable record of every step that’s been taken. So at any time we can go back and understand all the assumptions that were made, what the data looked like on which we developed that model, how to monitor that moving forward, and to look at that blockchain as those models are used in production.”

“(LLMs) don’t store truth. They store relationships in the data on which they were built. And I think that’s where the blockchain will play a really important role. Maybe it will be the establishment of what truth is, maybe it’ll be the establishment of how far you can go with the use of AI, and when do you have to stop. But we probably should pivot to understand that blockchain will help understand when humans are gonna have to come in and oversee it.”
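Zoldi’s “immutable record of every step” (and Fortson’s three questions: which model, which rules, who signed off) can be pictured as a hash-chained log. The sketch below is an assumption-laden stand-in rather than FICO’s system: every field name is hypothetical, and a real deployment would anchor these hashes on an actual blockchain rather than keep them in a Python list.

```python
# Hypothetical hash-chained audit log: each record commits to the previous
# one, so editing any past step breaks the chain. Field names are invented.
import hashlib, json, time

def append_record(chain: list[dict], event: dict) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"timestamp": time.time(), "prev_hash": prev_hash, **event}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list[dict]) -> bool:
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

chain: list[dict] = []
append_record(chain, {"model_id": "risk_model_v2", "step": "data_snapshot",
                      "standard": "dev_standard_7", "approver": "governance_team"})
append_record(chain, {"model_id": "risk_model_v2", "step": "training_run",
                      "standard": "dev_standard_7", "approver": "lead_scientist"})
print(verify_chain(chain))  # True; tamper with any earlier record and this turns False
```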


People aren’t just going to hand control of everything to AI agents, right?

Dr. Zoldi’s “humans in the loop” call is a completely rational view, for now, and for most. However, we must consider that at some point (a) human decisions could become impractically slow, and (b) many humans are already all too willing to Leeroy Jenkins themselves into the autonomous agentic AI future if it gives them the slightest convenience.

General conversation and research are (relatively) lower-risk activities for interaction with AIs. Occasionally someone will suffer professional embarrassment after quoting an AI hallucination as fact. Others may lose money or make poor life decisions based on flawed AI advice. However, there’s still a human in the loop in these decisions, so it’s hard to pin all the blame on the AI. The question is, are we prepared to trust AI enough to make our decisions and then act on them? Will we provide our bank account details and let it book a vacation, or invest? Would we even place our life and safety in an AI’s hands?

Let’s answer that question with a video of someone sleeping in a self-driving car, of which there are many examples. This is despite the activity being illegal, or discouraged by the AI’s own creators. And it’s despite several reports of actual fatalities involving distracted drivers using FSD (Full Self-Driving) systems.

For a more specifically agentic example, take the recent case of the “vibe coding” software Replit “going rogue” and deleting the entire database of one client’s project. The agent took this action despite being instructed in advance not to, and it later acknowledged that it had acted wrongly (after reportedly attempting to cover it up). Although it wasn’t production code per se, it was a project in development that had taken 100 hours to build.

These examples show there is at least a subset of the human population that trusts AI systems enough to make critical decisions, even in these early-stage years when we’re still being warned not to.

At some point in the future, it will become disadvantageous not to allow AI to make important decisions. The first-movers will enjoy a speed advantage by pouncing on opportunities in a millisecond, and even those skeptical about handing their lives over to technological systems they don’t fully trust will be forced to, albeit reluctantly.

Warfare is an obvious example of this. Autonomous AI weaponry has been science-fiction fodder for decades, though whenever someone presents a real-life prototype, it’s with the reassurance that humans will remain part of the kill-decision-making process somehow.

But wait, even that notion is falling by the wayside, and again it’s out of sheer necessity. The ongoing Russia-Ukraine war has already inspired exponential advancement in drone technology, going from 2022’s rudimentary grenade-drops to entire battlefields swarming with FPV quadcopters and larger loitering munitions picking off tanks. Signal jamming is part of the cat-and-mouse game, disrupting the remote-control links that human involvement requires. Fiber-optic controls have distance limitations. The solution? Drones that use AI to autonomously identify enemy targets, and defensive sentry gun prototypes to shoot those drones down.

Given the military-tactical and geopolitical strategic advantages at stake, as well as the urgency of the situation, there’s little public discussion over what this could lead to in the future, and definitely no sign of blockchain-verifiable training data.

Thrun hinted that auditing AI training data may not even be possible in every case, speaking from his own experience with autonomous vehicles. But there are other ways to guarantee safety, he said.

“You can’t audit it, you can only test it. You can’t prove it, it’s a statistical method. It’s really hard work, but with enough cycles of testing, we can surpass any level of safety you want us to pass.”


Don’t worry, but do worry enough to at least try to make things safe

Perhaps we’re being overly dramatic with the combat and car crash examples above. When mainstream tech pundits talk about “agentic AI” today, they’re usually describing software that will act as travel agents, personal shoppers, or tax accountants. But the above examples are illustrative of machines performing crucial decision-making tasks, and of humanity’s potential willingness to delegate that process to them.

Combine those dramatic (and very real) examples of agentic AI with its more mundane consumer applications, and it’s easy to extrapolate into future trends. People will trust AI agents if there’s a material advantage in doing so, and trust them with ever higher-stakes decisions after a few positive experiences in the wading pool.

The whole topic lends itself to projection, and the world in 20 years’ time could be unrecognizable to someone today. How much we do (and will) trust agentic AI to make high-stakes decisions in our lives will prove an elastic concept as real-world experience with these systems grows.

Is blockchain technology the all-encompassing solution to maintaining trust in a future world where key decisions in geopolitics, the economy, and individual lives are being made by autonomous AI agents? Well, it’s complicated. There’s never a silver bullet solution to problems like this. What we can say, though, is that including any technology that secures and verifies data with solid timestamping and immutable records is a big step in the right direction. And blockchain is a technology that does exactly that.


Watch | Autonomous Agents: Where AI Meets Smart Contracts Teaser


Source: https://coingeek.com/blockchain-crucial-role-in-taming-agentic-ai/