Why Enterprises Still Hesitate on Open-Source AI

Releases from DeepSeek and Mistral rival those from Anthropic and OpenAI, yet businesses demand vendor-grade guarantees before deploying them

“Open source has been a huge advantage for business,” said Arthur Mensch, CEO of Mistral AI. “The idea of being able to deploy locally and wherever you want has changed the way we look at the technology.” That optimism is reflected in the pace of recent releases.

DeepSeek’s latest V3.1, for example, delivers coding accuracy on par with proprietary rivals while being up to 68× cheaper. That should be prompting a wave of enterprise uptake. Yet a Capgemini survey reveals that three-quarters of executives still favor closed AI systems, citing support, security, and integration concerns.

The reasons aren’t just about performance. “Companies love experimenting with open source models because there’s no upfront cost and the performance is often impressive. But when they’re ready to move to production… you’re on your own if there’s any copyright issue with generated content,” AI strategist Biju Krishnan told AIM Media House. “Compare that to using something like Google’s Gemini, where full copyright protection comes standard.”

Concerns about stability reinforce the bias toward closed systems. Tools like LangChain illustrate the gap between experimentation and enterprise readiness. “It became clear that while LangChain was great for prototyping, it wasn’t enterprise ready,” Krishnan said. “Interestingly, the same team then launched LangGraph and LangSmith as proprietary SaaS solutions, specifically targeting enterprises that needed stability and support.”

Integration concerns make the equation even tougher. Most large enterprises already rely on hyperscalers (Azure, AWS, Google Cloud), whose AI offerings are tightly bundled into their ecosystems. Using an open model often means stepping outside those guardrails, adding new complexity around governance and monitoring. And while costs for querying AI models are dropping (OpenAI’s GPT-3.5 fell from $20 per million tokens to just $0.07 in under a year), the real value for enterprises is the predictability of pre-integrated services. Proprietary systems offer that predictability, even if at a premium.

Still, the advantages of open systems are difficult to ignore. DeepSeek trained its latest model for just $5.6 million, compared with the hundreds of millions spent by OpenAI or Anthropic on their frontier models. Mistral has shown that efficient models can run on laptops without much compromise in performance, opening doors for private deployments. Mozilla.ai, meanwhile, has built evaluation and governance tools aimed squarely at enterprises that want to own their data and infrastructure.

For enterprises, the potential benefits fall into three categories: cost savings, control, and customization. Lower inference costs can translate into millions in savings for companies running thousands of queries a day. Local or private-cloud deployment helps meet growing regulatory demands for sovereignty and data residency, particularly in Europe and Asia. And open weights allow industries like healthcare and finance to fine-tune models to their own workflows.
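The savings claim is easy to sanity-check with back-of-envelope arithmetic. The query volumes and per-token prices below are purely illustrative assumptions, not figures from any vendor:

```python
# Back-of-envelope comparison of annual inference spend.
# All prices and volumes below are hypothetical, for illustration only.

QUERIES_PER_DAY = 50_000      # assumed enterprise workload
TOKENS_PER_QUERY = 2_000      # assumed prompt + completion combined

# Assumed blended prices per million tokens (input and output averaged).
PROPRIETARY_PRICE = 10.00     # $/1M tokens, hypothetical closed model
OPEN_MODEL_PRICE = 0.50       # $/1M tokens, hypothetical open model

def annual_cost(price_per_million: float) -> float:
    """Annual inference spend in dollars at a given per-million-token price."""
    tokens_per_year = QUERIES_PER_DAY * TOKENS_PER_QUERY * 365
    return tokens_per_year * price_per_million / 1_000_000

proprietary = annual_cost(PROPRIETARY_PRICE)
open_model = annual_cost(OPEN_MODEL_PRICE)
print(f"Proprietary: ${proprietary:,.0f}/yr")   # → Proprietary: $365,000/yr
print(f"Open model:  ${open_model:,.0f}/yr")    # → Open model:  $18,250/yr
print(f"Savings:     ${proprietary - open_model:,.0f}/yr")
```

At these assumed prices a single 50,000-query-a-day workload saves roughly $350,000 a year; multiply across the many workloads a large enterprise runs and the figure reaches into the millions.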

There is also the innovation factor. “If you lead in open source, it means you will soon lead in AI,” said Clément Delangue, CEO of Hugging Face. Open ecosystems advance through distributed contributions, often at a pace proprietary labs can’t match. That collective momentum is forcing even long-time holdouts to shift. Earlier this year, OpenAI released two open models, its first since 2020, after CEO Sam Altman admitted the company may have been on the “wrong side of history.”

Meanwhile, open-source protocols are beginning to create the kind of stable standards enterprises have long demanded. Anthropic’s Model Context Protocol (MCP) has already been adopted by OpenAI, Google DeepMind, and others as a way to standardize how AI models connect to external tools and data sources. AWS has joined the MCP steering committee and publicly supports multiple protocols to avoid vendor lock-in.
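MCP itself is built on JSON-RPC 2.0, so the pattern it standardizes can be sketched as a plain request/response exchange. The tool name and arguments below are hypothetical, and the messages are hand-built rather than sent through an MCP SDK:

```python
import json

# Sketch of an MCP-style "tools/call" exchange (JSON-RPC 2.0).
# The tool name and arguments are hypothetical; a real client would
# send this over stdio or HTTP using an MCP SDK.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_customer_records",          # hypothetical tool
        "arguments": {"query": "invoices past due"},
    },
}

# A conforming server replies with a result correlated by the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 records found"}]},
}

print(json.dumps(request, indent=2))
assert response["id"] == request["id"]  # responses match requests by id
```

Because every model vendor and tool provider speaks the same message shape, an enterprise can swap the model behind the client without rewriting its tool integrations, which is precisely the lock-in concern the protocol addresses.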

The direction of travel is clear: enterprises want frameworks that combine stability with interoperability. “Enterprises want the stability and security that comes with big vendor backing, but they’re also desperate to avoid vendor lock-in,” Krishnan said. Open standards supported by large vendors (like Google’s Agent Development Kit) but not tied to a single ecosystem give companies a way to adopt open-source AI in production without sacrificing reliability or legal protection.

At present, proprietary models remain the default choice: Anthropic leads the enterprise market with 32% share, OpenAI is second with 25%, and open-source adoption is just 13% of daily workloads. But those numbers may not hold for long. The economics favor open systems, the innovation cycles are accelerating, and the beginnings of vendor-backed open frameworks suggest a shift in how enterprises may eventually balance risk and flexibility.


Mukundan Sivaraj
Mukundan covers the AI startup ecosystem for AIM Media House. Reach out to him at mukundan.sivaraj@aimmediahouse.com or Signal at mukundan.42.
