In the first phase of generative AI adoption, most large companies rented intelligence through an API. They connected to models from OpenAI, Google, or Anthropic, ran them on third-party infrastructure, and paid per token or prompt. The setup was quick to deploy but came with growing limitations.
Copyright lawsuits against OpenAI and Stability AI forced companies to question how model training data had been sourced. Compliance teams raised concerns about whether confidential information sent through APIs could be stored or reused. Marketing departments discovered that public models struggled to maintain brand tone or visual consistency across languages and channels. Costs also escalated as experimentation turned into large-scale production.
These pressures have pushed enterprises to take more control. Instead of renting generic models, many now aim to build or co-develop private ones trained on their own data. Adobe Inc. formalised that shift this month with the launch of Adobe AI Foundry, a service that lets companies build custom generative-AI models based on the Firefly family.
Firefly was introduced in 2023 and trained entirely on licensed, rights-cleared data from Adobe Stock and other approved sources. That training approach was meant to give enterprises a safe foundation for commercial use. According to Adobe, organisations have already created more than 25 billion assets with Firefly tools.
Adobe describes Foundry as a partnership rather than a product. “We’re surgically reopening the Firefly-based models, bringing in the IP from the enterprise, retraining them, and weighting for brand identity,” said Hannah Elsakr, Adobe’s vice president of generative-AI business ventures, in an interview with TechCrunch. The process, which Adobe calls deep tuning, retrains core model weights on company assets (imagery, video, and brand guidelines) instead of making small output-layer adjustments.
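As a rough illustration of that distinction, the sketch below contrasts a shallow fine-tune, which freezes the pretrained backbone and updates only an output head, with a deep tune that updates every weight. It is a generic PyTorch stand-in with placeholder layer sizes, not Adobe's Firefly architecture or training pipeline, which are not public.

```python
# Minimal sketch contrasting shallow fine-tuning with full-weight "deep tuning".
# The backbone, head, and learning rates are illustrative stand-ins only.
import torch
import torch.nn as nn

# A stand-in pretrained backbone with a task-specific output head.
backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
head = nn.Linear(128, 10)
model = nn.Sequential(backbone, head)

def shallow_finetune(model):
    """Light adaptation: freeze the backbone, train only the output head."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model[-1].parameters():  # output head only
        p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

def deep_tune(model):
    """Deep tuning as described above: core weights are retrained as well."""
    for p in model.parameters():
        p.requires_grad = True
    return list(model.parameters())

# Only the trainable subset is handed to the optimiser in each regime.
opt_shallow = torch.optim.AdamW(shallow_finetune(model), lr=1e-4)
opt_deep = torch.optim.AdamW(deep_tune(model), lr=1e-5)  # lower rate: every weight moves
```

The practical trade-off is that deep tuning can absorb far more of a brand's visual identity, but it touches every parameter, so it needs more curated data and more careful evaluation than a head-only fine-tune.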
The result is a model fluent in a client’s visual language. A retailer can generate imagery that matches its catalogue, a film studio can reproduce its character style, and a brand can adapt campaigns across markets while preserving a consistent look. Early partners include Walt Disney Imagineering and The Home Depot, whose chief marketing officer Molly Battin said Adobe’s service “helps us scale content generation across digital platforms and streamline creative workflows”.
The Rise of the Custom-Model Economy
Adobe’s Foundry is part of a wider movement toward enterprise-specific models. NVIDIA Corporation now offers its own AI Foundry, a platform and consulting service for creating custom generative models using its NeMo framework and DGX Cloud infrastructure. These can be retrained on private data and deployed through NVIDIA NIM microservices. “Enterprises need custom models to perform specialised skills trained on the proprietary DNA of their company — their data,” said Jensen Huang, NVIDIA’s chief executive, in a company press statement.
In media and entertainment, Runway AI built an exclusive video-generation model for Lionsgate, trained on the studio’s film and television catalogue of more than 20,000 titles. The Verge reported that the model is accessible only to Lionsgate, giving the company its own private creative engine. Stability AI has pursued a similar route, offering enterprise workflows for creative and advertising groups to build brand-specific models in controlled environments.
The cloud platforms have adapted as well. AWS Bedrock, Google Vertex AI, and Microsoft Azure AI Studio now let customers fine-tune or retrain foundation models within private or region-locked networks, keeping corporate data separate from public training sets. Each provider cites compliance and data sovereignty as key drivers.
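To make that concrete, the sketch below shows roughly what a private fine-tuning job looks like on AWS Bedrock using the AWS SDK for Python (boto3). The bucket paths, IAM role ARN, base-model identifier, hyperparameter values, and VPC IDs are placeholders; the exact options vary by base model and region, so treat this as an outline rather than a working configuration.

```python
# Illustrative sketch of a private model-customisation job on AWS Bedrock.
# All identifiers below are placeholders, not real resources.
import boto3

bedrock = boto3.client("bedrock", region_name="eu-central-1")  # region-locked deployment

response = bedrock.create_model_customization_job(
    jobName="brand-imagery-finetune-001",
    customModelName="acme-brand-model",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-image-generator-v1",  # placeholder base model
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://acme-private-training/brand-assets.jsonl"},
    outputDataConfig={"s3Uri": "s3://acme-private-training/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "8"},  # model-specific keys
    # Keep training traffic inside the company's own network.
    vpcConfig={
        "subnetIds": ["subnet-0abc123"],
        "securityGroupIds": ["sg-0def456"],
    },
)
print(response["jobArn"])
```

Vertex AI and Azure AI Studio expose comparable tuning jobs through their own SDKs; the common thread is that training data never leaves the customer's storage and network boundary.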
Market data supports the scale of the shift. Gartner Inc. predicts that by 2027, organisations will use small, task-specific AI models at least three times more than general-purpose large-language models. IDC forecasts that global enterprise spending on AI systems will surpass $500 billion by 2027, with private-model development among the fastest-growing categories. McKinsey & Co. reports that more than 30 percent of enterprises plan to retrain or fine-tune base models on proprietary data within the next year.
Regulation reinforces the trend. The EU AI Act, which entered into force in 2024, requires disclosure of training-data sources, bias testing, and risk controls for high-risk systems. California’s CPRA amendments expand automated-decision transparency rules. Both increase the incentive to train models on internal, rights-cleared data.
Owning a model changes both finance and governance. API access remains an operating expense that scales with usage. A custom model, once trained, becomes a long-term investment that can be refined over time. Analysts compare the shift to the enterprise migration from public to private cloud a decade ago, driven first by compliance, then by control.
Governance frameworks are evolving with it. Adobe integrates its Content Authenticity Initiative and Content Credentials systems into Firefly and Foundry outputs to ensure provenance tracking. NVIDIA’s NeMo Guardrails provide bias and safety testing. AWS and Microsoft have added responsible-AI dashboards to monitor enterprise deployments. These features make model ownership auditable, which is essential for regulated industries and large consumer brands.
Control, transparency and brand integrity now rank higher than novelty. “They needed models that understood all their products, all their brands, their creative direction,” Elsakr told Fast Company.
Enterprises that once depended on shared APIs are redesigning how AI fits into their infrastructure. The result is a market defined by ownership, where the most valuable models are the ones trained on data no one else can access.