“Where is the product?”
That question sums up the prevailing public sentiment around Thinking Machines Lab: one of the most generously funded AI companies ever to launch.
The company, founded in February 2025 by former OpenAI CTO Mira Murati, closed a $2 billion seed round this week, led by Andreessen Horowitz, at a $10 billion valuation. It has no demos, no announced customers, and no revenue. Yet it commands extraordinary investor confidence. That funding figure surpasses even the early rounds raised by AI rivals founded by fellow OpenAI alumni such as Dario Amodei and Ilya Sutskever. The only asset Thinking Machines has made public, much like Sutskever’s Safe Superintelligence, is its founder’s résumé.
Murati was involved in building OpenAI’s most prominent products, including ChatGPT and DALL-E. She also briefly served as interim CEO during the November 2023 boardroom standoff that temporarily removed Sam Altman. Her departure in September 2024 came after growing internal discord at OpenAI. Now, with Thinking Machines Lab, she’s promising a cleaner slate: one grounded in transparency, research rigor, and human-AI collaboration.
A Reputation-Fueled Rocket Launch
Thinking Machines was born into a wave of ex-OpenAI spinouts. Co-founder John Schulman, once OpenAI’s head of alignment, joined the new lab after a brief stint at Anthropic. Several other former OpenAI staff followed, including Barret Zoph (now CTO) and Jonathan Lachman. The team also includes prominent engineers from Meta, Mistral, and DeepMind. It’s a roster designed to attract capital, and it worked.
According to multiple reports, the pitch was high on ambition and low on specificity. The company is structured as a public benefit corporation, with a governance system that gives Murati weighted voting rights on the board. That concentration of power mirrors the centralized structures she reportedly criticized at OpenAI.
Still, investors are betting that this brain trust can build something that challenges OpenAI, Anthropic, and Google DeepMind on the frontier of large language models. Murati has publicly argued that today’s AI systems remain poorly understood, hard to customize, and difficult to trust. Her company’s stated mission is to make them “more widely understood, customizable, and generally capable.”
Yet what that means in practice is anyone’s guess, especially with the field of mechanistic interpretability still in its infancy.
In interviews, Murati has suggested that the next breakthroughs won’t just come from bigger models, but from better interfaces and more trustworthy infrastructure.
In theory, this could offer a middle path between the secrecy of OpenAI and the open-source emphasis of groups like Mistral or EleutherAI. In practice, however, Thinking Machines has not yet published meaningful benchmarks or model weights. Apart from an introductory blog post, the company’s output remains minimal.
The absence of deliverables has raised eyebrows. “These valuations are completely out of control,” one online user posted in reaction to the news. “OpenAI is so huge that a company funded solely on the clout of its former employees is worth $10B.” Others have questioned whether a company prioritizing research transparency and safety can also compete in the resource-intensive race to build foundation models at scale.
A Crowded Landscape with Shrinking Margins
Thinking Machines enters a hyper-competitive field. OpenAI remains dominant, with strategic partnerships and infrastructure support from Microsoft. Anthropic has positioned itself as a safety-first alternative, securing multi-billion-dollar commitments from Amazon and Google. Meanwhile, companies like DeepSeek and Mistral are proving that highly capable models can be built with smaller teams and budgets.
Unlike Safe Superintelligence Inc., which has publicly stated it will avoid releasing commercial products until it achieves superintelligence, Thinking Machines appears to be aiming for both: to build transformative models while also pursuing accessible tools and real-world impact.
Murati and her team have described Thinking Machines as a response to the perceived shortcomings of OpenAI. Former employees have criticized the chaotic codebases, limited modality support, and conservative release cadence of their previous employer. By contrast, Murati promises clean infrastructure, flexible systems, and openness.
But in trying to build a principled alternative to OpenAI, the company risks repeating a familiar pattern: vague ambitions dressed in lofty language, with no tangible output to show for them. Without a product (or at least a research artifact), the promise of transparency and AI-human synergy remains just that: a promise.
To her credit, Murati has acknowledged the uphill battle. In a recent interview, she argued that civilization is still in the early stages of AI development and emphasized the need for public understanding and responsible co-evolution with the technology. “The hardest part,” she said, “is figuring out how our civilization co-evolves with the development of science and technology as our knowledge deepens.”