AI startup Modular secured $250 million in a funding round, bringing its total external capital to $380 million and its valuation to $1.6 billion.
Founded in 2022 by software architects Chris Lattner and Tim Davis, Modular is building a software platform designed to allow AI developers to run applications efficiently across different chip architectures without needing to rewrite code for each individual hardware platform.
Building a Unified Software Layer
Chris Lattner, Modular’s CEO and co-founder, is best known as the creator of the LLVM compiler infrastructure and for his compiler work at Apple. He and co-founder Tim Davis, who has a background in large-scale cloud infrastructure from Google, envisioned a platform that abstracts the complexities of AI hardware to enable flexibility across GPUs, CPUs, ASICs, and custom silicon.
Lattner explained, “What we’re focused on is not like pushing down Nvidia or crushing them. It’s more about enabling a level playing field so that other people can compete.”
Modular has created its own programming language, Mojo, to bridge developers’ preferred languages such as Python with high-performance compiled code targeting AI accelerators. This lets developers write code once and deploy it anywhere, reducing the cost and delay of adapting AI models to different infrastructures.
Currently the platform serves cloud providers like Oracle and Amazon, as well as major semiconductor companies including Nvidia and AMD. This cross-hardware compatibility helps enterprises avoid vendor lock-in and choose hardware optimized for cost and performance. As AI workloads grow increasingly complex and resource-intensive, companies want freedom from dependency on a single chip provider.
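The portability idea described above, application code written once against a neutral layer that dispatches to vendor-specific kernels underneath, can be sketched in a few lines of Python. This is a hypothetical illustration of the general pattern, not Modular’s actual API; all names here (`register_backend`, `matmul`, the backend labels) are invented for the example.

```python
# Hypothetical sketch of a hardware-neutral dispatch layer.
# None of these names come from Modular's API; they only
# illustrate the "write once, run on any backend" idea.

from typing import Callable, Dict, List

Matrix = List[List[float]]

# Registry mapping backend names to vendor-specific kernels.
_KERNELS: Dict[str, Callable[[Matrix, Matrix], Matrix]] = {}

def register_backend(name: str):
    """Decorator that registers a kernel under a backend name."""
    def wrap(fn):
        _KERNELS[name] = fn
        return fn
    return wrap

@register_backend("cpu")
def matmul_cpu(a: Matrix, b: Matrix) -> Matrix:
    # Plain-Python reference implementation of matrix multiply.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# A second "vendor" would register its own tuned kernel the same
# way; for this sketch it simply reuses the reference kernel.
_KERNELS["gpu-vendor-x"] = matmul_cpu

def matmul(a: Matrix, b: Matrix, backend: str = "cpu") -> Matrix:
    # Application code calls this once; the layer picks the kernel,
    # so switching hardware means changing a string, not the code.
    return _KERNELS[backend](a, b)
```

The application-facing call (`matmul`) stays identical regardless of which backend executes it, which is the property that lets enterprises move workloads between vendors without rewrites.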
Challenging Nvidia’s Market Control
Nvidia commands more than 80% of the premium AI compute market, largely due to its proprietary CUDA platform, which tightly couples software with Nvidia’s GPUs. CUDA dominates AI development and deployment workflows, with over four million active developers relying on this ecosystem globally. This dominance limits options for enterprises because migrating AI workloads between hardware vendors requires costly rewrites and performance tuning.
Modular positions its software platform as a neutral “Switzerland”: a unified software layer that works with multiple vendors to enable workload portability. The company’s new funding round was led by the US Innovative Technology Fund and included existing investors DFJ Growth, GV, General Catalyst, and Greylock. The investment nearly tripled Modular’s valuation compared with its last round two years ago.
The startup claims that its platform achieves performance gains ranging from 20% to 50% compared to leading AI inference frameworks when running on the latest Nvidia Hopper and AMD MI series GPUs. Oracle and Amazon have integrated Modular’s stack to offer customers multi-vendor support without sacrificing speed or capabilities. Modular also emphasizes its expansion beyond AI inference into training workloads, which constitute a substantial portion of compute demand in research and enterprise deployments.
Lattner remarked on the challenge the company aims to solve: “When I departed Google, I felt somewhat disheartened because I was eager to address this issue. What we came to understand is that it’s not merely about intelligence, funding, or capability. It’s a structural challenge.” By building a compute-neutral software layer, Modular aims to remove a structural bottleneck that ties the industry to a single dominant chip ecosystem.
Expanding Engineering and Market Reach
With a current team of around 130 engineers, Modular plans to use the new funds to accelerate hiring and expand its presence internationally, especially in North America and Europe. The platform functions much like a hypervisor in traditional computing by abstracting away the hardware details, making AI development more accessible and efficient.
Software Over Silicon
The AI chip market is forecast to reach $100 billion in annual spending by 2034, driven by growing enterprise AI adoption and research initiatives that require massive compute power. Nvidia currently leads in revenue and market share, with data center revenue surpassing $30 billion in recent quarters. However, Modular’s approach addresses growing enterprise needs for agility, cost control, and multi-vendor strategies.
Investors view Modular’s platform as revolutionizing the software landscape that governs how AI hardware is accessed and managed. Sam Fort, partner at DFJ Growth, compared Modular to “VMware for the AI era,” a reference to how VMware enabled compatibility and management layers across physical computing hardware.
As AI models grow in size and diversity, software platforms that ensure portability and efficiency across hardware will play an increasingly critical role. Modular’s vision centers on reducing workflow friction by bridging hardware differences. Achieving broad adoption will depend both on performance improvements and the ability to gain developer trust through tools and ecosystem support.
Mounting pressure on Nvidia’s dominance, alongside enterprise demand for vendor flexibility, suggests that platform-neutral software will become a defining feature of AI infrastructure in the coming years. Modular’s challenge is to deliver consistent performance, developer usability, and ecosystem support to realize this ambition at scale.