The Nine AI Death Spirals
Imagine asking an airline's AI assistant about bereavement fares after a loved one passes. You get a clear answer and a promised discount. Trusting the official source, you book your flight—only to have the airline refuse to honor it.
Their excuse? The chatbot, their own representative, was somehow a separate entity whose advice wasn't binding.
This actually happened to an Air Canada customer in 2022. It ended with a 2024 tribunal ruling against the airline. But the legal outcome isn't the point. The point is what it reveals: AI products break trust in ways traditional software never could.
This story is one tiny crack hinting at a much larger fault line. We're constantly told AI will change our lives for the better. It certainly has potential. So why doesn’t it always live up to the hype?
Why AI Fails Differently
The traditional SaaS death spiral is familiar: Growth slows, users leave, the company panics. They slash prices, chase bad customers, cram in features nobody asked for. The business model breaks down, even if the software still technically works. This is the "Leaky Bucket" problem.
AI product failure is different. It's less like a leaky bucket and more like a broken relationship.
The AI death spiral doesn't usually start with business mistakes or simple bugs. It kicks off when the AI, through its design, its perceived social role, or its confusing opacity, clashes with fundamental human needs: our need for trust, control, safety, and feeling competent.
An AI isn't inert software like Excel. It acts like a collaborator, an assistant, sometimes a creative partner. We interact with it using language. We judge it based on social dynamics. Does it feel helpful and empowering? Or does it feel clumsy, arrogant, or threatening?
When you ask AI to write an email, summarize research, or generate code, you're delegating cognitive work. You're trusting it to handle tasks that require thinking. Because AI can be unpredictable and its inner workings are hidden, this requires a leap of faith. For anything beyond trivial tasks, trust isn't a feature—it's the prerequisite for using the product at all.
Our reactions to AI aren't purely rational. They're driven by neurochemicals that evolved for social relationships:
Dopamine grabs attention when we see something new and potentially rewarding—like that first demo of AI doing something magical. But dopamine is fleeting. It gets users in the door; it doesn't keep them there.
Oxytocin is released slowly through consistent, reliable, positive interactions. When an AI consistently helps and respects our control, we form a bond. It starts to feel like a reliable partner.
Cortisol floods our system instantly when we feel threatened, betrayed, or deeply frustrated. An AI giving dangerous advice, violating privacy, or feeling like a job threat is a cortisol event. One significant spike can shatter the bond permanently.
Understanding this dynamic—that AI failure is primarily about the human relationship—is key to diagnosing the nine main types of AI death spirals.
The Hype Spiral (Failure of Value)
This is the most seductive death. The product launches to massive excitement, racks up waitlist signups, gets the TechCrunch headline—and then collapses almost instantly.
The Hype Spiral happens when a team builds an AI product on excitement about the technology rather than a validated user need. It's the classic "solution hunting for a problem," supercharged by AI hype.
The initial rush fades quickly. Users try the product, find it doesn't solve a real problem significantly better than existing tools, and leave. No genuine value means no reason to stick around. No bond ever forms. The relationship never gets off the ground.
Jibo, the "social robot," perfectly embodies this spiral. It raised over $3.7 million in crowdfunding, graced the cover of Time magazine, and was marketed as the first "social robot for the home," an artificially intelligent family companion. This set sky-high expectations for a warm, trustworthy relationship.
The reality? A $900 stationary device with functionality worse than a cheap Amazon Echo. When the company went bankrupt, the cloud servers were shut down, turning expensive "companions" into plastic sculptures. The massive gap between the hyped social role and the disappointing utility led to abandonment and failure.
The warning sign: Users describe your product as "cool" but can't articulate what problem it solves.
The Opacity Spiral (Failure of Explainability)
User trust doesn't collapse suddenly here. It erodes through a thousand paper cuts of uncertainty.
This spiral happens when AI outputs feel arbitrary or inconsistent, reasoning is hidden, and users can't predict when the AI will perform well or poorly. It gives a great answer one minute and nonsense the next for similar prompts. It makes recommendations without justification.
The lack of transparency prevents a stable bond. Every interaction carries an undercurrent of risk—"Will it mess up this time? Why should I believe this?"—triggering low-level cortisol. Users learn they can't depend on the AI for critical workflows. They compartmentalize, trusting it only for low-stakes, easily verifiable tasks. The AI never becomes a true partner.
Zillow Offers bet the company on its own "Zestimate" black box. The model, unable to adapt to the volatile post-pandemic market, was wildly inaccurate. Zillow overpaid for thousands of homes. The opaque model's failure was hidden until it resulted in an $881 million loss and the entire division being shut down.
The warning sign: Users mentally categorize your product as a "toy" rather than a dependable tool for real work.
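One structural way out of this spiral is to make every answer carry its own evidence. Here is a minimal sketch of that idea in Python; the schema, field names, and confidence gate are illustrative assumptions, not any product's actual API:

```python
# Sketch: wrap model output in a schema that forces transparency.
# Field names and the confidence threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ExplainedAnswer:
    answer: str
    rationale: str                     # why the model says so, shown to the user
    sources: list[str] = field(default_factory=list)
    confidence: float = 0.0            # 0.0-1.0, self-reported or calibrated

def present(result: ExplainedAnswer, min_confidence: float = 0.7) -> str:
    if result.confidence < min_confidence or not result.sources:
        # Degrade honestly instead of answering confidently without support.
        return ("I'm not confident enough to answer this reliably. "
                f"(confidence={result.confidence:.2f})")
    src = "; ".join(result.sources)
    return f"{result.answer}\n\nWhy: {result.rationale}\nSources: {src}"

print(present(ExplainedAnswer(
    answer="Q3 churn rose 2.1 points.",
    rationale="Computed from the uploaded retention table, rows 14-38.",
    sources=["retention_q3.csv"],
    confidence=0.84,
)))
```

The design choice matters more than the details: when support is missing, the product declines to answer instead of hiding behind opacity.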
The Trust Collapse Spiral (Failure of Safety)
Unlike the slow erosion of the Opacity Spiral, trust collapse is sudden and catastrophic.
This spiral is triggered by a single, visible, dramatic incident: the viral mistake (AI gives harmful advice), the privacy horror story (data misuse revealed), the bias revelation (discriminatory outcomes exposed), or the confident hallucination (AI states dangerous falsehoods with complete certainty).
These events trigger a massive, immediate cortisol response. Trust isn't questioned—it's destroyed. The failure goes public, triggering media scrutiny. Users become hypervigilant. Suddenly every flaw seems magnified.
Meta's Galactica launched as a tool for organizing and accessing scientific knowledge. It was taken offline within 72 hours, after scientists discovered it confidently generating fake citations, nonsensical equations, and authoritative-sounding misinformation about vaccine safety.
The backlash was swift. Scientists who might have been advocates became vocal critics. Meta's reputation in AI took a hit lasting far beyond Galactica itself. The incident made researchers wary of Meta's AI products generally—trust contamination spreading beyond the single failed product.
The tipping point: When the narrative shifts from "this product failed" to "this product is dangerous." At that point, even perfect technical performance can't restore confidence.
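One concrete defense against the confident-hallucination failure that sank Galactica is to verify generated citations before displaying them. A minimal sketch, assuming a hypothetical in-memory index of known DOIs; a real system would query a bibliographic database such as Crossref:

```python
# Sketch: withhold unverifiable citations instead of displaying them confidently.
# KNOWN_DOIS is a hypothetical stand-in for a real bibliographic lookup.

KNOWN_DOIS = {
    "10.1000/example.001",
    "10.1000/example.002",
}

def verify_citations(citations: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split model-generated citations into verified and unverifiable."""
    verified, unverifiable = [], []
    for c in citations:
        (verified if c.get("doi") in KNOWN_DOIS else unverifiable).append(c)
    return verified, unverifiable

def render_answer(answer: str, citations: list[dict]) -> str:
    verified, unverifiable = verify_citations(citations)
    lines = [answer, "", "Sources:"]
    lines += [f"- {c['title']} (doi:{c['doi']})" for c in verified]
    if unverifiable:
        # Surface the uncertainty rather than asserting fabricated sources.
        lines.append(f"(Warning: {len(unverifiable)} citation(s) could not "
                     "be verified and were withheld.)")
    return "\n".join(lines)

if __name__ == "__main__":
    cites = [
        {"title": "A real paper", "doi": "10.1000/example.001"},
        {"title": "A hallucinated paper", "doi": "10.9999/fake.123"},
    ]
    print(render_answer("Model-generated summary goes here.", cites))
```

The point is the posture: an unverified citation is treated as a trust hazard to withhold, not a formatting detail.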
The Threat Spiral (Failure of Social Role)
The AI works. Users hate it anyway.
This spiral happens when an AI product is positioned as a replacement rather than an assistant. It takes over core parts of a user's job—especially tasks tied to professional identity—without giving the user sufficient control or agency.
The AI is perceived as a direct threat to identity and security, triggering a powerful cortisol response. Users don't just dislike the tool; they actively reject it, undermine it, or find workarounds to avoid it.
Klarna and Duolingo both made headlines announcing plans to replace significant numbers of human workers with AI. While intended to signal efficiency, this triggered immediate backlash. Users complained about noticeable drops in quality—AI lacking empathy, cultural nuance, or complex problem-solving skills. The negative reaction reportedly forced Klarna into a "major hiring initiative" and Duolingo to walk back its plans for some roles.
Simply replacing humans often backfires by damaging the user experience and triggering resistance. The AI failed the human part of the job.
The warning sign: Low adoption rates despite management mandates, or users actively finding workarounds.
The Friction Spiral (Failure of Usability)
The AI is technically brilliant. Using it is so painful that nobody bothers.
This spiral happens when a product requires excessive setup, complex prompting, constant manual correction, or doesn't integrate with existing tools. The AI creates "different work," not less work.
The constant hassle creates chronic, low-grade stress—a cortisol drip. Users weigh effort expended against value received. When friction is too high, the relationship feels exhausting. Users eventually break up with the tool and revert to familiar workflows.
Artifact, from the founders of Instagram, promised a perfectly personalized news feed. But the personalization required constant user feedback and training, and that friction proved too high. The app shut down, with the founders concluding that "the market opportunity isn't big enough to warrant continued investment."
The real issue: users weren't willing to do the work the AI demanded of them.
The warning sign: Usage drops below the level needed to sustain the product, and the cost of reducing friction becomes prohibitive.
The Economics Spiral (Failure of Unit Costs)
Every AI query costs money. When costs exceed what users will pay, companies face impossible choices.
This spiral starts when usage scales faster than monetization, power users consume disproportionate resources, and optimization hits its limits. The company must cut costs or raise prices—and every option damages the user experience.
Rate limiting frustrates users mid-workflow. Price increases make loyal early adopters feel punished. Model downgrades visibly degrade quality. Each "solution" creates new problems.
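To see how fast the arithmetic turns, here is a back-of-the-envelope sketch; every figure in it is hypothetical:

```python
# Sketch: per-user unit economics for a flat-price AI subscription.
# All figures are hypothetical illustrations, not benchmarks.

PRICE_PER_MONTH = 20.00       # flat subscription fee
COST_PER_1K_TOKENS = 0.01     # blended inference cost
TOKENS_PER_QUERY = 2_000      # prompt + completion

def monthly_margin(queries_per_month: int) -> float:
    """Revenue minus inference cost for one user in one month."""
    cost = queries_per_month * (TOKENS_PER_QUERY / 1_000) * COST_PER_1K_TOKENS
    return PRICE_PER_MONTH - cost

for label, q in [("median user", 300), ("heavy user", 1_500), ("power user", 5_000)]:
    print(f"{label:11s}: {q:>5} queries -> margin ${monthly_margin(q):+.2f}/month")
```

With a flat price, the heaviest users flip the margin negative; that is exactly the pressure behind every painful lever above.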
Users who had built an oxytocin bond with a capable product suddenly find it's worse. This betrayal is a powerful cortisol event. Widespread complaints about models "getting dumber" show how sensitive users are to this.
Character.AI offered unlimited AI conversations. Power users generated millions of messages, driving server costs through the roof. They had to throttle users and break promises to stay solvent.
The tipping point: When you've exhausted all the painful levers—raised prices, throttled usage, downgraded models—and still can't reach profitability. You're burning cash to serve an angry, shrinking user base.
The Obsolescence Spiral (Failure of Defensibility)
Your AI product is rapidly outpaced or commoditized. A competitor ships something better. A platform bundles your core feature for free. An open-source model matches your proprietary tech.
Your unique value evaporates. Growth stalls. Less money for R&D means you fall further behind. More users defect. The spiral accelerates.
AI products are especially vulnerable because they often lack traditional moats: no unique technology (just a wrapper around a third-party model), no real switching costs (moving between AI writing tools is trivial), no network effects (individual users don't make the product better for others), and rapid commoditization (cutting-edge features become standard everywhere within months).
Jasper AI built a successful business as an AI copywriting tool, reportedly reaching $125M ARR by wrapping GPT-3 in a marketing-focused interface. But when ChatGPT launched with a free, consumer-friendly interface, Jasper's differentiation evaporated. Why pay $99/month when ChatGPT Plus costs $20 and handles the same tasks?
They weren't disrupted by a better mousetrap. They were disrupted by mousetraps becoming cheap enough to bundle elsewhere.
The warning sign: Users can't articulate why they should pay for your product versus alternatives.
The Regulatory Spiral (Failure of Compliance)
Legal, regulatory, or ethical problems compound until the product becomes too risky to operate.
This spiral starts with a lawsuit over training data, a regulatory investigation into discriminatory outcomes, a safety incident, or public backlash over privacy violations. Problems compound globally—legal whack-a-mole across jurisdictions. Compliance becomes a patchwork nightmare. Resources shift from innovation to defense.
Clearview AI built facial recognition by scraping billions of images from the web without consent. This triggered a global cascade: lawsuits, massive fines, deletion orders from regulators across multiple countries, cease-and-desist letters from major platforms, and bans on police use in several cities.
Clearview is trapped in a tightening noose of legal battles, restricted markets, and ethical controversy—constantly diverting resources to defense instead of development.
The tipping point: When operating legally becomes impossible in key markets, liability outweighs profitability, or the brand becomes permanently synonymous with harm.
The Degradation Spiral (Failure of Performance)
The product doesn't just feel worse. It is worse.
The AI's actual performance gets objectively worse over time—not just perceived quality, but real degradation. Causes include drift (the world changes but the training doesn't), model collapse (training on AI-generated content), adversarial pollution (bad actors corrupting signals), and infrastructure decay (optimizations that sacrifice quality for cost).
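Of these causes, drift is the most measurable. A minimal monitoring sketch, assuming you kept a sample of a numeric input feature from training time and log the same feature in production; the synthetic data and alert threshold are placeholders:

```python
# Sketch: detect input drift by comparing the training-time distribution of a
# feature against recent production traffic with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for real logged data: training snapshot vs. a drifted live window.
train_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_sample = rng.normal(loc=0.6, scale=1.3, size=5_000)  # the world has shifted

stat, p_value = ks_2samp(train_sample, live_sample)

ALERT_THRESHOLD = 0.01  # placeholder; tune per feature and traffic volume
if p_value < ALERT_THRESHOLD:
    print(f"Drift alert: KS={stat:.3f}, p={p_value:.2e} -> consider retraining")
else:
    print(f"No significant drift detected (KS={stat:.3f}, p={p_value:.2e})")
```

The same comparison works per feature and per time window; the hard part is acting on the alert before users notice.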
Users gradually lose confidence. They use the product less. Less usage means less feedback for improvement. The quality gap versus competitors widens. More users leave—especially the power users, who notice first. Even less valuable feedback comes in. The spiral accelerates.
Google's AI search summaries (SGE) faced this dynamic. Early users reported that AI summaries were increasingly generic, sometimes contradicted the search results below them, or stated outdated information. As users learned to distrust the summaries and scroll past them to organic results, Google got less signal about which AI answers were actually helpful.
The tipping point: When user perception shifts from "this sometimes makes mistakes" to "this is fundamentally unreliable." Recovery requires a quantum leap in quality—but by then, the users and resources to make that leap may no longer exist.
The Pattern Beneath the Spirals
If you look across these nine failures, you'll notice something: none of them are primarily technology problems.
Jibo didn't fail because the API was down. It failed on value.
Air Canada didn't fail because of latency. It failed on reliability.
Klarna didn't fail because of bugs. It failed on social role.
To survive, you have to stop treating AI as a tech spec and start treating it as a relationship.
The products that make it through aren't necessarily the smartest or most capable. They're the ones designed to be trusted—products that enhance human capability rather than threatening it, that show their work rather than hiding behind opacity, that respect the human need for control even when automation is possible.
Build for the relationship, or build for the graveyard.