When OpenAI first captured global attention with ChatGPT, it was a consumer phenomenon. Now, with a $100 million partnership embedding its latest models into Databricks’ data platform, the company is making a decisive push into the enterprise world, where the real money, and the real risks, reside. For Databricks, the assumption is that customers will want not just large language models, but AI embedded directly into their data stack, with governance and cost optimization built in.
The deal guarantees OpenAI at least $100 million in revenue whether enterprises use the models or not, shifting much of the financial risk to Databricks. In return, Databricks customers (more than 20,000 across industries) will be able to access OpenAI’s latest frontier models, including GPT-5, natively within the company’s Data Intelligence Platform and its Agent Bricks product. That means a financial services client can call GPT-5 directly from a SQL query, or a healthcare company can build domain-specific AI agents without moving data outside its governed environment. It’s a promise of frictionless access: no separate API keys, no complex compliance reviews, just an extension of the tools enterprises already use.
For OpenAI, this is another step departing from its early distribution model. The company built its reputation on ChatGPT, a consumer-facing product that became a global phenomenon, and on APIs delivered through Microsoft Azure. Enterprises that wanted to experiment with GPT-4 or GPT-5 had to negotiate contracts, manage data-governance questions, and often build bespoke integrations. By embedding directly into a platform like Databricks, OpenAI moves closer to where enterprise data lives, expanding its footprint beyond consumer adoption and Azure’s walled garden. The Databricks deal is in fact the company’s first true integration partnership with a business-focused data product vendor, according to OpenAI COO Brad Lightcap.
This pivot toward enterprise has been underway for more than a year. In 2023, OpenAI introduced ChatGPT Enterprise, a version of its chatbot with hardened security, privacy guarantees, and higher-capacity features. Since then, it has released open-weight models such as gpt-oss-20B and gpt-oss-120B, designed to give companies more sovereignty over how and where they deploy AI. It has also embraced interoperability efforts like the Model Context Protocol, which makes it easier for enterprises to connect large language models to internal data sources and tools. OpenAI has even tailored offerings for governments, extending the enterprise playbook into the public sector with a focus on compliance and security.
Databricks, for its part, gains a marquee partner to complement its multi-model strategy. The company has already inked similar agreements with Anthropic and Google. Aside from serving as a conduit for OpenAI’s models, Databricks is adding its own technical innovation that could dramatically reduce costs. Recent research from Databricks showed that its “GEPA” approach to evolutionary prompt optimization can make models up to 90 times cheaper to operate, in some cases elevating smaller models to perform on par with premium offerings.
Therein lies a tension: if Databricks can optimize prompts, manage governance, and reduce costs, then the models themselves risk becoming interchangeable commodities. The partnership’s advantage lies not only in GPT-5’s reasoning prowess, but in Databricks’ ability to deliver that power securely, cheaply, and within the operational context of Fortune 500 data estates.
Contrast this with other enterprise AI plays and the distinctiveness of the Databricks-OpenAI deal becomes clearer. Microsoft, OpenAI’s closest ally, has embedded GPT models into Office products and Azure services, branding them as Copilot features. The reach is massive, but the integrations are largely at the application layer, not directly into enterprise data platforms. Anthropic, OpenAI’s main rival, has pursued partnerships with AWS and Oracle, seeking to position Claude as the safer, more reliable choice for business. Cohere has targeted vertical SaaS, embedding its models into Oracle’s ERP and NetSuite products. And open-source players like Mistral and Hugging Face are betting that some enterprises will prefer running models entirely on-premises, valuing control over performance.
What Databricks and OpenAI are betting on is adjacency to data. Rather than bolt AI onto applications after the fact, they want to make it native to the infrastructure where companies already manage their most sensitive assets. It’s a powerful vision: AI not as an add-on, but as a layer in the data stack itself.
Still, risks abound. Enterprises are notoriously conservative about reliability and cost. Studies already suggest that many AI pilots have failed to deliver tangible business value. Hallucinations remain unsolved. And the lock-in of embedding proprietary models into data workflows may spark resistance, particularly as cheaper, open-weight alternatives improve. For Databricks, the financial risk is explicit: if customers don’t adopt OpenAI models at scale, the company still owes $100 million. For OpenAI, the danger is that if partners like Databricks optimize around cost and governance, the company’s flagship models could fade into the background, valuable but replaceable.