Most enterprise AI initiatives fail — a shocking 95%, compared to just 25% of traditional IT projects, according to MIT research.
The reason isn’t bad technology — it’s that companies keep giving AI unconstrained autonomy without understanding its limitations or how it applies to their business needs, repeating the exact mistakes that created email spam storms in the 1990s, billion-dollar website failures in the 2000s and a graveyard of mobile apps throughout the 2010s.
Fortune 500 companies are learning this lesson the hard way, but history provides a clear blueprint for breaking this expensive cycle before regulators force their hand.
Failed AI Experiments To Learn From
The MIT Sloan study should serve as a wake-up call for any executive rushing into AI implementation. But the real lessons come from watching industry giants fail spectacularly when they give AI too much freedom.
Taco Bell’s 18,000 Waters Incident: The fast-food chain’s AI drive-through system made headlines when it accepted a customer’s order for 18,000 waters, unable to recognize the obvious error or apply common-sense limits. While one incident seems humorous, the underlying failure (giving AI authority to process orders without basic sanity checks) represents millions in potential losses from incorrect orders, wasted food and damaged customer relationships.
Air Canada’s Legal Nightmare: When Jake Moffatt’s grandmother died in November 2022, he consulted Air Canada’s AI chatbot about bereavement fares. The bot confidently invented a policy allowing retroactive discounts that never existed. When Moffatt tried to claim the discount, Air Canada argued in court that its chatbot was “a separate legal entity” it wasn’t responsible for. The court disagreed and ordered the airline to pay damages. The real cost wasn’t the $812 payout; it was the legal precedent that companies can’t hide behind autonomous AI decisions and remain liable for their AI’s promises.
Google’s Dangerous Advice: In May 2024, Google’s AI Overview feature told millions of users to eat one small rock daily for minerals, add glue to pizza to prevent cheese sliding and use dangerous chemical combinations for cleaning. The AI pulled these “facts” from satirical articles and decade-old Reddit jokes, unable to distinguish between authoritative sources and humor. Google scrambled to manually disable results, but screenshots had already gone viral, damaging trust in its core product. The system had access to the entire internet but lacked the basic judgment to recognize obviously harmful advice.
These aren’t isolated incidents. BCG found 74% of companies see zero value from AI investments, while S&P Global discovered abandonment rates jumping from 17% to 42% in just one year.
We’ve Seen This Movie Before
From failed email campaigns to overinvestment in websites and mobile apps, we’ve seen these patterns at every new wave of innovation. Today’s AI failures follow a script written decades ago, and the incidents below show how it plays out:
The Microsoft Email Catastrophe (1997): When Microsoft gave its email system unlimited autonomy, a single message to 25,000 employees triggered the infamous “Bedlam DL3” incident. Each “please remove me” reply went to everyone, generating more replies and creating an exponential storm that crashed Microsoft’s Exchange servers for days. The company had given email complete freedom to replicate and forward without considering cascade effects. By 2003, spam comprised 45% of global email traffic because companies gave marketing departments unlimited sending power. The backlash forced the CAN-SPAM Act, fundamentally changing how businesses could use email.
Sound familiar? It’s the same pattern as AI systems multiplying orders or generating responses without limits. Today’s AI failures are pushing the world toward similar regulatory intervention.
Boo.com’s $135 Million Website Lesson (1999-2000): This fashion retailer built revolutionary technology — 3D product views, virtual fitting rooms and features that wouldn’t become standard for another decade. It spent $135 million in six months creating an experience that required high-speed internet when 90% of users had dial-up. The site took eight minutes to load for most customers. Boo.com gave its technical team free rein to build the most advanced e-commerce platform possible, never asking whether customers wanted or could use these features.
The parallel to today’s AI implementations is striking: impressive technology that ignores the practical reality of everyday consumers.
JCPenney’s $4 Billion Mobile App Miscalculation (2011-2013): When Ron Johnson took over JCPenney, he forced a complete digital transformation, eliminating coupons and sales in favor of an app-first strategy. Customers had to download the mobile app for all deals and promotions. The result? A $4 billion loss and a 50% stock price collapse. Johnson assumed customers wanted technological innovation, but JCPenney’s core demographic neither trusted the app nor wanted to change their shopping habits for it.
The lesson is brutal: forcing AI or any technology on users who fear or distrust it guarantees failure. Today’s AI implementations face the same resistance from employees and customers who don’t trust automated systems with important decisions.
The AI Pattern Is The Playbook
Every failed technology wave follows four predictable stages:
Stage 1, Magical Thinking: Companies treat new technology as a cure-all. Email would revolutionize communication. Websites would replace stores. Mobile apps would eliminate human interaction. AI will eliminate jobs. This thinking justifies giving technology unlimited autonomy because “it’s the future.”
Stage 2, Unconstrained Deployment: Organizations implement without guardrails. Email could message anyone, anytime. Websites could do anything Flash allowed. Apps demanded total behavior change. AI can generate any response. Nobody asks “should we?” only “can we?”
Stage 3, Cascade Failures: Problems compound exponentially. One bad email creates thousands. One poor website design alienates millions of mobile users. One forced app adoption drives away loyal customers. One AI hallucination spreads dangerous misinformation to millions within hours.
Stage 4, Forced Correction: Public backlash and regulatory intervention arrive together. Email got CAN-SPAM. Websites got accessibility laws. AI regulation is being drafted right now — the question is whether your company will help shape it or be shaped by it.
Reduce The Risk Of AI Investments
For executives just dipping their toes into AI, one lesson is clear: AI can cause catastrophic damage to your brand, perhaps more than the technologies of previous eras, given the autonomy AI systems are granted. What can you do to reduce the risk of your investments and avoid the fate of the companies above?
Start With Constraints, Not Capabilities: Before asking what AI can do, define what it shouldn’t do. Taco Bell should have limited order values. Air Canada should have restricted what policies its bot could discuss. Google should have blacklisted medical and safety advice. Every successful technology implementation begins with boundaries.
Create Kill Switches Before Launch: You need three levels of shutdown: immediate (stop this response), tactical (disable this feature) and strategic (shut down the entire system). Parcel carrier DPD, whose chatbot was goaded into swearing at a customer and criticizing the company, could have saved its reputation if it had a way to instantly disable its chatbot’s ability to respond on those topics.
Measure Twice, Launch Once: Run contained pilots with clear success metrics. Test with adversarial inputs — users trying to break your system. If Taco Bell had tested its AI with someone intentionally giving confusing orders, it would have caught the multiplication bug before it went viral.
Own The Outcomes: You can’t claim AI successes while disowning AI failures. Air Canada learned this in court. Establish clear accountability chains before implementation. If your AI makes a promise, your company keeps it. If it makes a mistake, you own it.
The companies that win with AI won’t be those that implement fastest or spend most. They’ll be those that learn from three decades of technology failures instead of repeating them — and remember that forcing technology on unwilling users is a recipe for disaster.
The pattern is clear. The blueprint exists. The only question is whether you’ll follow the 95% into failure or join the 5% who learned from history.