Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.
We marvel at how intelligent the latest AI models have become — until they confidently present us with complete nonsense. The irony is hard to miss: as AI systems grow more powerful, their ability to distinguish fact from fiction isn’t necessarily improving. In some ways, it’s getting worse.
Summary
- AI reflects our information flaws. Models like GPT-5 struggle because training data is polluted with viral, engagement-driven content that prioritizes sensation over accuracy.
- Truth is no longer zero-sum. Many “truths” coexist, but current platforms centralize information flow, creating echo chambers and bias that feed both humans and AI.
- Decentralized attribution fixes the cycle. Reputation- and identity-linked systems, powered by crypto primitives, can reward accuracy, filter noise, and train AI on verifiable, trustworthy data.
Consider OpenAI’s own findings: its o3 reasoning model hallucinated answers about 33% of the time in benchmark tests, according to the company’s own paper. The smaller o4-mini went off the rails nearly half the time. The newest model, GPT-5, was supposed to fix this and indeed claims to hallucinate far less (~9%). Yet many experienced users find GPT-5 dumber in practice: slower, more hesitant, and still often wrong (also evidence that benchmarks only get us so far).
Nillion CTO John Woods was explicit in his frustration, saying ChatGPT went from ‘essential to garbage’ after GPT-5’s release. Yet the reality is that increasingly advanced models will only get worse at telling truth from noise. All of them, not just GPT.
Why would a more advanced AI feel less reliable than its predecessors? One reason is that these systems are only as good as their training data, and the data we’re giving AI is fundamentally flawed. Today, this data largely comes from an information paradigm where engagement trumps accuracy while centralized gatekeepers amplify noise over signal to maximize profits. It’s thus naive to expect truthful AI without first fixing the data problem.
AI mirrors our collective information poisoning
High-quality training data is disappearing faster than we create it. There’s a recursive degradation loop at work: AI primarily digests web-based data; the web is becoming increasingly polluted with misleading, unverifiable AI slop; synthetic data trains the next generation of models to be even more disconnected from reality.
The problem goes deeper than bad training sets; it lies in the fundamental architecture of how we organize and verify information online. Over 65% of the world’s population spends hours on social media platforms designed to maximize engagement. We’re thus exposed, at an unprecedented scale, to algorithms that inadvertently reward misinformation.
False stories trigger stronger emotional responses, so they spread faster than the corrections. The most viral content, which is also the content most likely to be ingested by AI training pipelines, is therefore systematically biased toward sensation over accuracy.
Platforms also profit from attention, not truth. Data creators are rewarded for virality, not veracity. AI companies optimize for user satisfaction and engagement, not factual accuracy. And ‘success’ for chatbots is keeping users hooked with plausible-sounding responses.
Ultimately, AI’s data and trust crisis is an extension of the ongoing poisoning of our collective human consciousness. We’re feeding AI exactly what we’re consuming ourselves. AI systems can’t tell truth from noise because we ourselves can’t.
Truth is consensus after all. Whoever controls the information flow also controls the narratives we collectively perceive as ‘truth’ after they’re repeated enough times. And right now, a bunch of massive corporations hold the reins to truth, not us as individuals. That can change. It must.
Truthful AI’s emergence is a positive-sum game
How do we fix this? How do we realign our information ecosystem — and by extension, AI — toward truth? It starts with reimagining how truth is created and maintained in the first place.
In the status quo, we often treat truth as a zero-sum game decided by whoever has the loudest voice or the highest authority. Information is siloed and tightly controlled; each platform or institution pushes its own version of reality. An AI (or a person) stuck in one of these silos ends up with a narrow, biased worldview. That’s how we get echo chambers, and that’s how both humans and AI wind up misled.
But many truths in life are not binary, zero-sum propositions. In fact, most meaningful truths are positive-sum: they can coexist and complement each other. What’s the “best” restaurant in New York? There’s no single correct answer, and that’s the beauty of it: the truth depends on your taste, your budget, your mood. That my favorite song is a jazz classic doesn’t make your favorite pop anthem any less “true” for you. One person’s gain in understanding doesn’t have to mean another’s loss. Our perspectives can differ without nullifying each other.
This is why verifiable attribution and reputation primitives are so critical. Truth can’t just be about the content of a claim — it has to be about who is making it, what their incentives are, and how their past record holds up. If every assertion online carried with it a clear chain of authorship and a living reputation score, we could sift through noise without ceding control to centralized moderators. A bad-faith actor trying to spread disinformation would find their reputation degraded with every false claim. A thoughtful contributor with a long track record of accuracy would see their reputation — and influence — rise.
Crypto gives us the building blocks to make this work: decentralized identifiers, token-curated registries, staking mechanisms, and incentive structures that turn accuracy into an economic good. Imagine a knowledge graph where every statement is tied to a verifiable identity, every perspective carries a reputation score, and every truth claim can be challenged, staked against, and adjudicated in an open system. In that world, truth isn’t handed down from a single platform — it emerges organically from a network of attributed, reputation-weighted voices.
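To make the idea concrete, here is a minimal, illustrative sketch in Python of how a claim could be bound to an identity, a stake, and a living reputation score. The names (Contributor, Claim, resolve, credibility), the DID strings, and the update rule are hypothetical and not drawn from any existing protocol; an on-chain system would implement the same logic with smart contracts and real token slashing.

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    did: str                 # decentralized identifier, e.g. "did:example:alice" (hypothetical format)
    reputation: float = 1.0  # starts neutral; rises with verified claims, falls with refuted ones

@dataclass
class Claim:
    author: Contributor
    statement: str
    stake: float             # tokens the author locks behind the claim
    status: str = "open"     # "open", "verified", or "refuted"

def resolve(claim: Claim, verified: bool, learning_rate: float = 0.1) -> None:
    """Adjudicate a challenged claim and update the author's reputation.

    Verified claims raise reputation in proportion to the stake risked;
    refuted claims burn the stake and degrade reputation.
    """
    if verified:
        claim.status = "verified"
        claim.author.reputation += learning_rate * claim.stake
    else:
        claim.status = "refuted"
        claim.author.reputation = max(0.0, claim.author.reputation - learning_rate * claim.stake)
        claim.stake = 0.0  # stake is slashed

def credibility(claim: Claim) -> float:
    """Weight a claim by its author's reputation and the stake behind it."""
    return claim.author.reputation * (1.0 + claim.stake)

# Usage: two contributors make competing claims; adjudication shifts influence.
alice = Contributor("did:example:alice")
bob = Contributor("did:example:bob")

honest = Claim(alice, "Protocol X processed 1.2M transactions yesterday", stake=5.0)
viral = Claim(bob, "Protocol X was hacked for $100M", stake=0.5)

resolve(honest, verified=True)
resolve(viral, verified=False)

print(credibility(honest) > credibility(viral))  # True: accuracy now outweighs virality
```

The point of the sketch is the incentive flip: influence accrues to identities with a record of verified claims and real stake at risk, not to whoever generates the most engagement.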
Such a system flips the incentive landscape. Instead of content creators chasing virality at the expense of accuracy, they’d be staking their reputations — and often literal tokens — on the validity of their contributions. Instead of AI training on anonymous slop, it would be trained on attributed, reputation-weighted data where truth and trustworthiness are baked into the fabric of the information itself.
Now consider AI in this context. A model trained on such a reputation-aware graph would consume a much cleaner signal. It wouldn’t just parrot the most viral claim; it would learn to factor in attribution and credibility. Over time, agents themselves could participate in this system — staking on their outputs, building their own reputations, and competing not just on eloquence but on trustworthiness.
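As a simple illustration of what “consuming a cleaner signal” could mean in practice, the sketch below weights training samples by the reputation attached to their source. The records and scores are invented for the example; a real pipeline would pull them from the attribution graph described above.

```python
import random

# Hypothetical attributed training records: each carries the text plus the
# author's reputation score from an attribution layer like the one sketched earlier.
records = [
    {"text": "Peer-reviewed summary of the protocol audit", "reputation": 4.2},
    {"text": "Anonymous viral thread with unverified numbers", "reputation": 0.3},
    {"text": "Maintainer's changelog entry", "reputation": 2.8},
]

def sample_batch(records: list[dict], batch_size: int) -> list[dict]:
    """Reputation-weighted sampling: high-credibility sources are drawn
    proportionally more often, instead of raw virality deciding what the
    model ingests."""
    weights = [r["reputation"] for r in records]
    return random.choices(records, weights=weights, k=batch_size)

batch = sample_batch(records, batch_size=8)
print(sum(r["reputation"] > 1.0 for r in batch), "of 8 samples come from higher-reputation sources")
```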
That’s how we break the cycle of poisoned information and build AI that reflects a positive-sum, decentralized vision of truth. Without verifiable attribution and decentralized reputation, we’ll always be stuck outsourcing “truth” to centralized platforms, and we’ll always be vulnerable to manipulation.
With them, we can finally move beyond zero-sum authority and toward a system where truth emerges dynamically, resiliently, and — most importantly — together.