A split-scene illustration shows the gap between enterprise AI ambition and real business results, from confident growth projections to failed pilots, falling ROI, and broken workflows.
The widely cited claim that 85% of AI projects fail is a misquote. Current data suggests reality is worse. A 2025 S&P Global survey found 42% of companies abandoned most AI initiatives before production. That figure was 17% just one year earlier. An MIT study reported 95% of generative AI pilots delivered no measurable impact on revenue.
Why It Matters
The “85%” figure traces back to a February 2018 Gartner press release. Research VP Jim Hare predicted that through 2022, 85% of AI projects would deliver erroneous outcomes. He attributed this to bias in data, algorithms, and the teams managing them. Over time, commentators stripped that specific warning of its context, and it hardened into the universal shorthand that “85% of AI projects fail” – repeated by vendors, consultants, and executives worldwide.
The actual failure rate remains stubbornly high across every major study. Boston Consulting Group reported in 2024 that 74% of companies struggle to show tangible value from AI. RAND Corporation found AI projects fail at twice the rate of non-AI IT efforts. Gartner warns that organizations will abandon 60% of AI projects through 2026 for lack of AI-ready data.
The costs are not abstract. Zillow lost over $500 million and cut 2,000 jobs after its AI home-buying algorithm overvalued properties. IBM’s Watson for Oncology consumed $62 million at MD Anderson Cancer Center without treating a single patient. In 2024, McDonald’s pulled its AI drive-thru system from 100 restaurants. The technology failed to reliably process orders. These collapses share one thread: organizations bought technology before understanding the problem.
What’s Next
The root causes are well-documented. They are rarely technical. RAND Corporation identified misunderstanding the core problem as the single most common reason AI projects fail. Harvard Business School professors Ayelet Israeli and Eva Ascarza reached the same conclusion in Harvard Business Review. Most AI initiatives fail not because models are weak. They fail because organizations are not built to sustain them.
Data quality is the biggest practical obstacle. Gartner found 63% of organizations lack confidence in their data management practices for AI business deployments. BCG adds that 70% of AI challenges stem from people and process issues. Only 10% involve algorithms. Companies that succeed focus on fewer use cases. BCG found leaders average 3.5 initiatives versus 6.1 for laggards. They redesign workflows before selecting technology.
The gap between leaders and laggards is widening. McKinsey found that organizations generating real value from AI are twice as likely to redesign end-to-end workflows first. They also commit more than 20% of digital budgets to AI as an ongoing capability – not a one-time experiment.
The 85% number was never quite right. But the warning behind it grows more urgent every year. Most organizations remain unprepared for the discipline AI demands – even as global investment accelerates past $630 billion.
Sources: Gartner · S&P Global via CIO Dive · BCG · RAND Corporation · Harvard Business Review · MIT via Fortune · McKinsey
