One AI question continues to divide enterprise leaders: is it faster, and more effective, to build in-house or to buy off-the-shelf? Proprietary platforms offer the lure of long-term differentiation, while vendor solutions boast speed and a lower up-front burden. The real challenge, as leaders made clear in a recent executive discussion from MachineCon New York, isn't choosing one; rather, it's learning how to orchestrate both.
The conversation, moderated by Arvind Balasundaram, Executive Director of Commercial Insights & Analytics at Regeneron, brought together senior leaders who’ve grappled with this question firsthand: Geeta Pyne, Senior Managing Director and Chief Architect at TIAA; and Ferris Zhang, Global Product Lead for Business Experimentation and Optimization at Mastercard.
What Speed to Insight Really Demands
While “speed to insight” remains a coveted metric in AI adoption, the consensus was that it’s often misunderstood. The rush to results can obscure the more fundamental need for problem clarity and iterative decision-making. Rather than treating speed as a race to code, leaders argued for a more principled process: one that includes fast, low-risk experimentation, clear “go/no-go” filters, and tight alignment with business impact from the outset.
This is an “express lane” mindset: not every project should advance at the same pace, and not every idea is worth three months of investment. By creating predefined lanes for experimentation, teams can quickly assess which use cases merit deeper commitment before wasting cycles on misaligned or premature builds.
Total Cost Is More Than Just Budget
When it comes to evaluating build-versus-buy decisions, cost isn’t just about the initial spend. The panel emphasized viewing total cost of ownership through a multi-dimensional lens: one that includes runtime, ongoing maintenance, platform extensibility, and architecture compatibility.
That lens becomes even more important with generative and agentic AI, where the lifecycle of any given technology can be short. While some teams may be tempted by the allure of custom builds, there was a strong consensus that unless the use case is deeply core to the business, the long-term overhead, especially for maintenance and upgrades, can quickly erode any perceived advantage.
What stood out was a shared belief in principle-driven decision-making: organizations need clear criteria for what constitutes a differentiator worth building. Everything else should be scrutinized for potential partnerships or outright acquisition.

Lock-In, Talent, and the Flexibility Dilemma
Vendor lock-in, the panel suggested, is a calculated risk. In times of rapid technological change, making short-term bets with long-term flexibility in mind is essential. This means designing systems with abstraction layers, so that swapping out a foundational model or cloud component down the line doesn’t require a full rebuild.
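A minimal sketch of what such an abstraction layer might look like, assuming hypothetical adapter names (the panel did not prescribe a specific design): application code depends on a small interface, and each vendor or in-house model sits behind its own adapter, so swapping providers means writing one new adapter rather than rebuilding.

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Abstraction layer: business logic depends on this interface,
    never on a specific vendor SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAModel(TextModel):
    """Hypothetical adapter for one vendor's API."""
    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor SDK here.
        return f"[vendor-a] {prompt}"

class InHouseModel(TextModel):
    """Hypothetical adapter for an internally hosted model."""
    def complete(self, prompt: str) -> str:
        # A real adapter would call the internal inference endpoint here.
        return f"[in-house] {prompt}"

def summarize(model: TextModel, text: str) -> str:
    # This code sees only the interface; swapping the model behind it
    # requires no change here.
    return model.complete(f"Summarize: {text}")
```

The design choice is the familiar adapter pattern: the short-term bet lives in one adapter class, while the long-term flexibility lives in the interface.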
Talent is another pressure point. As AI talent remains scarce and expensive, organizations are forced to be selective about what they build. The panelists stressed aligning internal engineering efforts with high-leverage, high-differentiation work, reserving scarce talent for what truly matters.
Interestingly, the panel was skeptical of the romanticism around in-house builds. Even experienced engineering teams can underestimate the hidden costs of scaling, extending, and governing custom platforms. The message is clear: build only what will set you apart, and don’t conflate capability with strategic value.
Composability and the Power of “Buy and Extend”
Rather than viewing build-versus-buy as a binary, the panel advocated for more hybrid, composable strategies. In many cases, organizations start with a vendor solution and extend it selectively, adding layers of customization, integrations, or proprietary logic.
This “buy and extend” model offers a practical way to manage feature velocity and technical debt. Off-the-shelf platforms can accelerate delivery, while extension points allow for business-specific enhancements without owning the full tech stack. The goal isn’t just to get something working: it’s to ensure that what’s working can evolve with the business.
This is particularly critical for legacy-heavy sectors, where digital transformation must contend with decades of existing infrastructure. This was likened to modernizing thousands of repositories simultaneously: a task that demands more than good engineering; it demands orchestration at scale.
Governance and Guardrails for AI
Another theme that emerged was risk management, especially as AI capabilities intersect with sensitive data, compliance requirements, and intellectual property concerns. The conversation underscored the need for observability, codified policies, and clear design patterns to prevent unintentional leakage or misuse.
Effective governance, they argued, starts not with policing but with scaffolding. By codifying approved use cases, enforcing data classification standards, and baking observability into the stack, organizations can scale AI responsibly, without bottlenecking innovation.
One approach discussed was building a capability map to prioritize use cases, narrowing the funnel from opportunity to investment by layering in filters around differentiation, maturity, and technical readiness.
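As a rough illustration of that narrowing funnel, assuming hypothetical use-case names and a simplified scoring scheme (the discussion named the filters but not a concrete mechanism): each candidate is scored on differentiation, maturity, and technical readiness, and only those passing every filter advance to investment.

```python
# Hypothetical capability map: candidate use cases scored against the
# three filters mentioned in the discussion.
use_cases = [
    {"name": "custom pricing engine",  "differentiator": True,  "maturity": "high", "ready": True},
    {"name": "generic doc summarizer", "differentiator": False, "maturity": "high", "ready": True},
    {"name": "agentic claims triage",  "differentiator": True,  "maturity": "low",  "ready": False},
]

def narrow_funnel(cases):
    """Keep only use cases that pass every filter; the rest become
    candidates to buy, partner on, or defer."""
    return [c["name"] for c in cases
            if c["differentiator"] and c["maturity"] == "high" and c["ready"]]

print(narrow_funnel(use_cases))  # only "custom pricing engine" survives
```

In practice the filters would be weighted and debated rather than boolean, but the shape is the same: a wide opportunity list in, a short investment list out.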
The Future Is Orchestrated
In closing, the discussion returned to the idea that no single decision, build or buy, will define success. What will? The ability to adapt. With new models, modalities, and market entrants emerging monthly, the organizations that succeed will be those that can orchestrate flexibly across ecosystems, integrate diverse capabilities, and continuously reprioritize around business impact.
AI innovation, in this view, is less about technological supremacy and more about disciplined execution.