Remember when companies swore they’d build their own CRM or trading platform? We’ve been here before. The buy vs build debate has haunted C-suites since the first enterprise software salesman walked through the door. But now, with generative artificial intelligence (GenAI) and large language models (LLMs) dominating everything from café chatter to board meetings, that age-old question has taken on new urgency, and some eye-opening answers.
Last year, we published the article “An Age-Old Question: Buy Or Build Software (In-House)?” exploring the classic software dilemma: should you build it yourself or buy off the shelf? The consensus back then leaned cautiously optimistic about in-house builds for those with deep pockets and deeper technical benches. Fast forward to today, and the Massachusetts Institute of Technology (MIT) just dropped a bombshell that should make every dealmaking investor rethink their AI strategy.
MIT’s Networked Agents and Decentralized AI (NANDA) initiative recently published “The GenAI Divide: State of AI in Business 2025”, and the findings are nothing short of startling. After analyzing over 300 initiatives, conducting 52 interviews, and reviewing 153 survey responses, the initiative discovered something that should give every tech-ambitious executive pause: 95% of in-house GenAI projects fail to deliver measurable business value.
We’re talking about $30-40 billion in enterprise investment, and only 5% of these homegrown initiatives are achieving rapid revenue growth or making any real dent in the profit and loss statement. The rest of that spending is stuck in pilot limbo, burning cash while producing little more than impressive demos and disappointed stakeholders.
The MIT NANDA report reveals that most GenAI failures aren’t happening because the models are flawed or the technology isn’t ready. The tech works. The problem is everything around it.
The biggest killer is what researchers call the “learning gap”. Companies are deploying LLMs that lack contextual learning, memory, and real adaptability. They’re building sophisticated tools that don’t actually fit how people work.
Then there’s the budget. Most firms are pouring AI investment into high-visibility areas like sales and marketing because that’s where executives want to see magic happen. Meanwhile, back-office automation, the unsexy stuff like invoice processing and compliance workflows, shows far higher ROI but gets a fraction of the attention and resources, according to the report.
And perhaps most telling is the shadow AI economy. While IT departments labor over enterprise-grade in-house technology, employees are quietly solving their problems with ChatGPT and Claude on their personal accounts. It’s the consumerization of AI, and it’s happening whether leadership acknowledges it or not.
Not everyone’s struggling, though. The MIT study found that specialized external vendors succeed roughly 67% of the time, while internal builds clock in around 33%. That’s a two-to-one advantage for buying over building, and it’s hard to ignore those odds when you’re staking millions on an outcome.
The standout success stories are agile startups that are laser-focused on solving one specific pain point. Some scaled from zero to $20 million in annual revenue by avoiding the trap of trying to be everything to everyone. They built narrow, deep solutions rather than broad, shallow ones.
Interestingly, organizational structure matters too. When line managers (not just IT departments) drive in-house AI adoption, projects tend to choose more adaptable tools and see better outcomes. The people closest to the problems make better technology choices than those furthest from them.
However, despite the surge in GenAI use across workplaces, companies are seeing surprisingly little measurable return on investment. According to a recent BetterUp and Stanford survey, AI doesn’t automatically boost productivity. In the U.S., 40% of employees said they’ve received AI-generated “workslop” in the past month — content that actually creates more cleanup work.
So what does this mean for those of us writing checks or running companies? The MIT report offers some practical wisdom:
First, seriously consider buying over building. Unless you’re an AI company yourself, specialized external tools will likely serve you better than custom in-house solutions. The time-to-value is faster, the risk is lower, and the success rate is roughly double.
Second, decentralize your experimentation. Stop treating GenAI implementation like a top-down IT project. Empower teams to pilot use cases agilely. Let them fail fast, learn faster, and iterate without requiring board approval for every tweak.
Third, insist on adaptive AI tools. The technology should learn and integrate into your actual workflows, not the other way around. If your people are working around the system rather than with it, you’ve already lost.
Fourth, measure what matters. Stop evaluating AI projects by technical specs and start measuring business metrics. Revenue impact. Cost savings. Time to resolution. The model’s accuracy score means nothing if it’s not moving the needle on outcomes that affect your bottom line.
For dealmaking investors, the MIT research carries particular weight. When you’re evaluating portfolio companies or potential acquisitions, their AI strategy isn’t just a tech question, but a capital allocation question. Are they burning resources building when they should be buying? Are they chasing AI initiatives in highly visible areas while ignoring high-return opportunities?
This is precisely why platforms like Cyndx have focused on building AI-powered tools specifically designed for the investor and financial community. Rather than forcing dealmakers to shoulder the risks and costs of building in-house AI capabilities (with that daunting 95% failure rate), we offer specialized solutions for target identification, relationship mapping, and deal sourcing that investors can leverage immediately.
Take Scholar, our deep research and due diligence tool, for example. Unlike generic AI models that require you to piece together information from multiple sources, Scholar delivers everything in one platform. Built specifically for advisors, corporates, and research teams who need to move fast and go deep, it creates comprehensive 30+ page research reports in minutes. It pulls from both our proprietary data on over 31 million companies and trusted external sources, using agentic AI workflows to validate, synthesize, and cite insights.
The GenAI divide isn’t just about who adopts AI and who doesn’t, but about who adopts it intelligently versus who just follows the herd. We’ve learned this lesson before with every major technology wave. The companies that won weren’t always the ones with the most impressive in-house technology. They were the ones that knew how to leverage the right tools at the right time.
Don’t be part of the 95% of failed projects. Contact us now.