What It Really Takes to Move AI Beyond the Pilot Phase: Insights from MachineCon NY

Inside the operating models, data strategies, and change tactics that work

Despite a surge of optimism and experimentation in Gen AI, the majority of organizations remain stuck in proof-of-concept purgatory. A recent BCG study found that 74% of companies struggle to create real business value from AI, even as others report tens of millions of dollars in impact. The gap is about execution. The difference between aspiration and adoption lies in the operating model, the data foundation, and, above all, the discipline to scale deliberately.

This tension between rapid experimentation and enterprise-grade implementation was at the center of a recent panel at MachineCon New York, featuring Sudip Chakraborty, Principal and Head of AI and Gen AI at Axtria; John Tucker, Director of Enterprise Data at McDonald’s; Diego de Aragão, SVP of Balance Sheet Management & Analytics at Citi; and moderator Ozgur Dogan, President, Americas at Blend360.

The Cost of Getting Data Wrong

Without data quality, lineage, and observability in place, even the most promising models are destined to fail. But fixing data infrastructure is a business problem that demands business ownership. Embedding stewards from the start ensures that context and accountability travel with the data, improving both utility and adoption.

And yet, there is tension. While AI investments demand rapid payoff, foundational data initiatives often yield intangible or deferred ROI. This misalignment can deprioritize essential groundwork. The most effective organizations resist the pressure to cut corners, choosing instead to “go slow to go fast,” investing early in data governance to unlock sustainable value later.

Flexibility Over Perfection in Tech Architecture

From an architectural perspective, over-committing to specific tools or platforms too early can backfire. The AI landscape is evolving too fast for static tech stacks. A modular design enables organizations to pivot as models, vendors, and use cases shift.

Still, even modular systems must be production-ready. Quick-and-dirty pilots often falter at scale because they lack the load testing, compliance protocols, and security scaffolding required to survive in real-world environments. If you build only for experimentation, you’ll likely stay in the lab.

ROI Is a Trajectory

Return on investment should be seen less as a single metric and more as a trajectory. Adoption (who is using the solution, how frequently, and for what impact) offers the earliest signal of future value. Without adoption, value is hypothetical.

Organizations that bake in adoption metrics from day one, rather than after deployment, are far more likely to scale. Adoption planning isn’t just a communication task. It touches design, user experience, trust, and change management. In practice, it means co-creating with users rather than building in isolation.

Change Management Is Not Optional

AI projects succeed not when they are technically sound but when they are socially accepted. Change management must be designed in. That includes training, cross-functional workshops, and executive sponsorship.

Human-centered design leads to faster adoption and fewer downstream barriers. Technical credibility must be paired with empathy, especially when AI initiatives disrupt legacy workflows or long-standing teams. Senior sponsorship is essential for momentum and trust.

Centralized vs. Federated? Both

The right operating model remains a live debate. Purely centralized Centers of Excellence offer consistency but can slow down innovation at the edge. Fully federated models empower local teams but often lead to duplication, inconsistency, and compliance risk.

Hybrid models are emerging as the best path forward. A central team defines standards, guardrails, and shared tooling, while federated squads build domain-specific use cases. This structure enables both scale and specificity, provided there’s a strong feedback loop between central and local teams.

In high-performing organizations, four distinct pods often emerge: innovation teams that experiment with new technologies, industrialization teams that scale successful prototypes, reusable asset teams that maintain shared libraries, and operational teams that support production systems. When coordinated effectively, this structure drives a flywheel effect, reducing cost and time-to-value with each additional use case.

AI Literacy Is a Strategic Imperative

As AI systems grow more complex, widespread AI literacy across the enterprise is now a requirement. Upskilling is no longer a task reserved for IT. From legal to HR to finance, every function must understand how AI works, what it can (and can’t) do, and how to engage with it responsibly.

This cultural shift helps temper both over-enthusiasm and resistance. It also helps the organization prioritize strategically, curbing the tendency to chase every shiny object and instead focus on high-value, high-readiness use cases.

A Shared Understanding of What It Takes

Scaling AI is not a matter of finding the right model or tool. It’s about building the muscle to go from prototype to production in a repeatable, sustainable way. That requires flexible architecture, disciplined governance, early planning for adoption, and an operating model that balances control with speed.

Most importantly, it requires a cultural commitment to the long game: one in which early groundwork enables exponential returns down the line. AI doesn’t scale on code alone. It scales on trust, coordination, and execution.


Mukundan Sivaraj
Mukundan covers the AI startup ecosystem for AIM Media House. Reach out to him at mukundan.sivaraj@aimmediahouse.com.
