Artificial Intelligence

MIT Report Crosses the GenAI Divide

It only takes one look at the key findings from MIT’s GenAI Divide report to see why it made such a big splash this week: 95% of enterprise GenAI pilots fail to deliver measurable returns.

MIT knows how to grab headlines. The paper – based on interviews with 150 enterprise execs, a survey of 350 employees, and an analysis of 300 GenAI deployments – highlights a clear chasm between the few successful projects and the many stalled ones.

  • After $30B+ of GenAI spend across all industries, only 5% of organizations have seen a measurable impact on their top lines. Adoption is high, but transformation is rare. 
  • While general-purpose models like ChatGPT have improved individual productivity, that hasn’t translated to enterprise outcomes. Most “enterprise-grade” systems are stalling in pilots, and only a small fraction actually make it to production.

Why are GenAI pilots failing? The report suggests the problem isn’t model quality, but a learning gap on both sides: tools that don’t learn from their users, and organizations that don’t adapt their workflows.

  • Most enterprise tools don’t remember, don’t adapt, and don’t fit into real workflows. This creates “an AI shadow economy” where 90% of employees regularly use general models, yet reject enterprise tools that can’t carry context across sessions.
  • Employees ranked output quality and UX issues among the biggest barriers, both of which trace directly back to missing memory and workflow integration.

What’s driving successful deployments? There was a consistent pattern among organizations successfully crossing the GenAI Divide: top buyers treated AI startups less like software vendors and more like business service providers. These orgs:

  • Demanded deep customization aligned to internal processes and data
  • Benchmarked tools on operational outcomes, not model benchmarks
  • Partnered through early-stage failures, treating deployment as co-evolution
  • Sourced AI initiatives from frontline managers, not central labs

There’s always a catch. Most of the pushback on the report was due to its definition of “failure,” which was not having a measurable P&L impact within six months. That definition would make “failures” out of everything from the internet to cloud computing, and underscores why enterprise transformation is measured in years, not months.

The Takeaway

The GenAI growing pains might be worse than expected, but that’s helped startups realize that they need to ditch the SaaS playbook for a new set of rules. In the GenAI era, deployment is a starting line, not a finish line.
