This week, Anaconda raised $150 million at a $1.5 billion valuation. The 12-year-old Python tooling provider, long considered infrastructure for data scientists, now reports over $150 million in annual revenue. Earlier this year, China-based DeepSeek released DeepSeek-R1, a mixture-of-experts language model whose base training took roughly 2.788 million GPU hours: a budget estimated to be about 96% cheaper than that of comparable closed models. A few weeks after that, researchers from Stanford and the University of Washington fine-tuned a competitive reasoning model in 26 minutes on 16 GPUs, for a total compute cost of about $50.
None of these companies are OpenAI, Google, or Anthropic. But taken together, they point to a shift: open-source AI tooling is taking over. It is performant, cost-effective, and increasingly the preferred stack for enterprise deployment. Proprietary APIs aren’t going away, but in vertical after vertical, enterprises are moving to rebuild their AI foundations around open components they can inspect and run on their own infrastructure.
Control, Not Just Cost
According to Anaconda’s State of Enterprise Open-Source AI report, 58% of organizations use open-source tools in at least half their AI/ML projects. One-third use them in three-quarters or more. This is a strategic decision by CTOs who want control over their most critical systems: how they’re built, what data they use, where they run, and what risks they carry.
The modern open-source AI stack is modular, and increasingly production-ready. Startups and open-source projects like Reflection AI, LangChain, and vLLM are building the inference, orchestration, and customization layers around open models like LLaMA, Mistral, and DeepSeek. Python-native libraries like LangExtract provide validation, schema enforcement, and auditability. SkyPilot, Ray, and Modal enable cost-efficient deployment across cloud and on-prem environments. The pieces now exist to build full enterprise-grade systems from open components.
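To make the inference layer concrete, here is a minimal sketch of running an open-weight model locally with vLLM. The checkpoint name and prompt are illustrative assumptions, not details from the report.

```python
# Minimal sketch: batch inference over an open-weight model with vLLM.
# The model name and prompt are placeholders; any Hugging Face checkpoint
# that vLLM supports works the same way.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # self-hosted open model
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = ["Summarize the main risks of unmaintained dependencies in two sentences."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

The same engine can also be exposed as an OpenAI-compatible HTTP server, which is part of why open components slot so easily into stacks originally built around proprietary APIs.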
Even OpenAI, the emblem of closed systems, hasn’t stayed fully outside this current. Last week, a lightweight open-source model dubbed Horizon Alpha was leaked, believed to be a GPT-5 base model designed for internal use. While the company declined to comment on the leak, the incident suggested that even OpenAI builds with open tooling behind the scenes.
Meanwhile, Chinese labs behind DeepSeek, GLM (from Tsinghua-affiliated Zhipu AI), and Alibaba’s Qwen are releasing powerful models with few or no usage restrictions. The releases are a bid to seed ecosystems and exert influence at the infrastructure layer. That dynamic isn’t unique to China; it’s shaping U.S. strategy too.
Open doesn’t mean risk-free. According to Anaconda, 32% of enterprises have experienced security issues tied to open-source AI tools. The most serious: malicious packages, data leaks, and unmaintained dependencies. A separate 2025 report from OpenLogic found 26% of enterprises still run end-of-life distributions like CentOS, exposing them to serious compliance risks.
Still, enterprises appear willing to accept these risks in exchange for autonomy. Closed models may be stronger today, but they evolve on their vendors’ timelines and can’t be customized deeply. And as costs fall and tooling matures, the open stack keeps gaining ground.
The Future Is Selectively Open
This doesn’t mean enterprises will abandon proprietary AI. Most are adopting hybrid strategies: closed models for specific edge cases; open models for everything else. The real shift is architectural. Where enterprises once bought end-to-end solutions, they now assemble systems from interchangeable parts. The upside is adaptability. When regulations change or new capabilities emerge, modular stacks let companies respond without rewriting their infrastructure.
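A hedged sketch of what that hybrid pattern can look like: a thin routing layer that sends sensitive workloads to a self-hosted open model behind an OpenAI-compatible endpoint (as vLLM and similar servers provide) and routes everything else to a proprietary API. The endpoint URL, model names, and routing rule are illustrative assumptions, not a prescribed architecture.

```python
# Illustrative hybrid routing: self-hosted open model for sensitive work,
# proprietary API for the rest. Endpoint, models, and policy are placeholders.
from openai import OpenAI

open_stack = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # e.g. a vLLM server
closed_api = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(prompt: str, sensitive: bool = True) -> str:
    """Route sensitive workloads to in-house infrastructure."""
    client, model = (
        (open_stack, "meta-llama/Llama-3.1-8B-Instruct") if sensitive
        else (closed_api, "gpt-4o-mini")
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(complete("Classify this contract clause: ...", sensitive=True))
```

Because both sides speak the same API shape, swapping a model in or out becomes a configuration change rather than a rewrite.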
This shift also reshapes innovation itself. Open tooling has unlocked bottom-up R&D. The fact that a small academic team fine-tuned a competitive model in 26 minutes for about $50 would have been unthinkable two years ago. Now, it’s part of a trend. In 2024 alone, GitHub saw 1.4 million first-time contributors to open-source AI projects. Jupyter Notebook usage rose 92% year-over-year.
Enterprises aren’t just consumers of this work. Increasingly, they’re participants. They’re fine-tuning small language models on proprietary data, building domain-specific agents, and contributing patches upstream. And they’re choosing open because it lets them build AI the way they build software: with visibility, modularity, and ownership.
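As a rough illustration of that in-house fine-tuning work, the sketch below attaches LoRA adapters to a small open model with the Hugging Face transformers, peft, and datasets libraries. The base model, data file, and hyperparameters are placeholders; a real pipeline would add evaluation, data governance, and experiment tracking.

```python
# Sketch: LoRA fine-tuning of a small open model on in-house text.
# All names, paths, and hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-0.5B"  # any small open-weight model
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Train only low-rank adapters; the base weights stay frozen.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Proprietary data would go here; a plain text file stands in.
data = load_dataset("text", data_files={"train": "internal_docs.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-finetune",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("slm-finetune/adapter")  # adapters stay in-house
```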
The question facing enterprise leaders is no longer whether to adopt open source, but how to do it well. That means investing in security audits, software bills of materials (SBOMs), and governance frameworks. It means training DevOps teams to manage models as well as microservices, and tracking license obligations as carefully as they track metrics.
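On the license-tracking point, even a short Python script can surface what is installed and what terms it declares; the flag list below is a made-up policy, and real governance would lean on dedicated SBOM tooling such as CycloneDX or SPDX generators rather than ad-hoc scripts.

```python
# Sketch: enumerate installed packages and flag declared licenses for review.
# The DISALLOWED set is an illustrative policy, not a recommendation.
from importlib.metadata import distributions

DISALLOWED = {"AGPL", "SSPL"}  # example terms a compliance team might flag

for dist in distributions():
    name = dist.metadata["Name"]
    license_field = dist.metadata.get("License") or "UNKNOWN"
    status = "REVIEW" if any(term in license_field for term in DISALLOWED) else "ok"
    print(f"{name:30} {license_field:40.40} {status}")
```

None of this is exotic. It is the same supply-chain discipline enterprises already apply to the rest of their software, extended to models and the tooling around them.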








