Making Agentic AI Real in the Enterprise

The success of agentic AI depends as much on people as it does on technology.

By 2025, many large organizations are still grappling with the realities of AI adoption. Experiments are plentiful, but the number of initiatives that have scaled into core business operations remains small. Conversations often circle around potential use cases, with fewer examples of tangible, enterprise-wide transformation. At MachineCon New York 2025, Bhaskar Kalita, Global Head of Financial Services and Insurance at Quantiphi, cut through the noise with a practical roadmap for embedding agentic AI into complex enterprise environments. His focus was on what it takes to move from trial projects to embedded capability anchored in governance, measured execution, and deliberate adoption.

Kalita opened by revisiting the milestones in AI’s evolution. The earliest systems were narrow in purpose, single-task programs hard-coded to follow strict “if-then-else” logic. They could execute a function flawlessly within their narrow lane but were blind to anything outside it. Then came multi-task AI, which allowed a single system to perform a small cluster of related functions. Early optical character recognition was a classic example: scanning an image, recognizing text, and deciphering fonts. It may seem basic now, but at the time it represented a shift toward more adaptable, generalizable intelligence.

The arrival of deep learning accelerated this progression. By training models on massive datasets, AI moved from rigid rules to adaptive pattern recognition. It could now interpret complex, variable inputs in real time, unlocking applications like Tesla’s driver-assist features—systems capable of recognizing lanes, detecting objects, and making rapid, context-aware decisions.

The most transformative leap, however, came in late 2022 with the rise of large language models. For the first time, AI could generate and understand human-like text at scale, opening the door to a broad range of tasks. These models began as horizontal, general-purpose engines, but over the past two years they have been fine-tuned for specific industries, infused with domain expertise, and embedded into targeted workflows. This shift is what enables AI to progress from being a background assistant to becoming a true operational partner.

Running alongside this technological history is the framework of AI autonomy. At Level 0, robotic process automation executes repetitive, rule-bound tasks with precision but no awareness. Level 1 introduces adaptive copilots that adjust in response to human input. Level 2, partial autonomy, sees AI string together multiple linked tasks without constant supervision. Level 3, conditional autonomy, allows the system to manage entire scenarios within pre-defined boundaries—similar to a car’s traffic-assist mode, which can take control in congested conditions without driver intervention. In practice, these levels blur into a spectrum rather than discrete steps.

For digital-native companies, advancing through this spectrum is often a smoother climb. With fewer legacy constraints, cleaner data foundations, and lightweight operations, they can integrate new AI capabilities quickly, experiment at low cost, and iterate rapidly. Large, established enterprises face a different equation. Their scale is built on decades of systems and processes, which also means data silos, inconsistent formats, and legacy infrastructure that was never designed for AI-driven autonomy. Even with newer interoperability tools such as the Model Context Protocol and agent-to-agent communication standards, integration in such settings demands deep engineering, rigorous planning, and cross-functional coordination.

This is where governance becomes the decisive first step. Kalita stressed that governance is not a final sign-off at the end of a project but the backbone from day one. At Quantiphi, every AI initiative undergoes an eight-step governance review covering security, privacy, compliance, and operational readiness. This process runs in parallel with development, ensuring that systems are not just functional but safe, compliant, and auditable before they ever go live.

The second anchor is measurement. Scaling AI without precise visibility into its value can turn investment into waste. Every cost, whether infrastructure, monitoring, or human oversight, needs to be tracked down to the task level, allowing ROI to be linked directly to measurable performance gains. Decisions to expand deployment should be grounded in evidence, not optimism.
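The mechanics of task-level ROI tracking can be sketched in a few lines. The cost categories, task names, and dollar figures below are illustrative assumptions, not Quantiphi data; the point is the structure: attribute every cost category to a single task, compute ROI there, and only then aggregate for leadership review.

```python
# Illustrative sketch of task-level ROI tracking. All names and numbers
# are hypothetical; only the cost categories (infrastructure, monitoring,
# human oversight) come from the talk.
from dataclasses import dataclass


@dataclass
class TaskRecord:
    name: str
    infra_cost: float       # compute/API spend attributed to this task
    monitoring_cost: float  # observability tooling share
    oversight_cost: float   # human review time, at a loaded hourly rate
    hours_saved: float      # measured reduction vs. the manual baseline
    hourly_value: float     # value assigned to one freed hour

    @property
    def total_cost(self) -> float:
        return self.infra_cost + self.monitoring_cost + self.oversight_cost

    @property
    def roi(self) -> float:
        """(value gained - cost) / cost, computed at the task level."""
        gain = self.hours_saved * self.hourly_value
        return (gain - self.total_cost) / self.total_cost


tasks = [
    TaskRecord("doc-triage", 120.0, 30.0, 50.0, hours_saved=40.0, hourly_value=15.0),
    TaskRecord("kyc-check", 200.0, 40.0, 110.0, hours_saved=20.0, hourly_value=15.0),
]

for t in tasks:
    print(f"{t.name}: cost=${t.total_cost:.0f}, ROI={t.roi:.0%}")

# Aggregate only after task-level numbers exist, so a portfolio-level
# figure can always be traced back to the tasks that produced it.
portfolio_cost = sum(t.total_cost for t in tasks)
portfolio_gain = sum(t.hours_saved * t.hourly_value for t in tasks)
print(f"portfolio ROI: {(portfolio_gain - portfolio_cost) / portfolio_cost:.0%}")
```

A breakdown like this also surfaces which tasks are worth expanding: in the hypothetical data above, one task earns a strongly positive return while the other runs at a loss, a distinction a single blended number would hide.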

With these principles in place, the operational playbook takes shape. Start with a central, agent-ready platform that can serve as the foundation for all business units, producing reusable components that can be adapted across functions. Introduce autonomy in stages so that systems and teams can adapt together. Measure ROI at the smallest executable level before aggregating results for leadership review. And above all, design AI to work alongside people—amplifying human capability rather than replacing it.

Two examples illustrated these ideas in action. In wealth management, an AI agent was deployed to handle the full advisory process: aggregating client data from multiple sources, enriching it with analytics, generating personalized recommendations, executing investment decisions, and maintaining ongoing client engagement. The most dramatic efficiency gains came in the recommendation and execution stages, but improvements were seen across the workflow, creating a smoother overall client experience.

The second example came from within Quantiphi itself. Codera, its AI-powered software development accelerator, operates like a human-guided “pair programmer.” It can plan, reason, choose or build tools, and collaborate with developers throughout the project lifecycle. Rolled out across more than 100 development activities, Codera has delivered average time savings of 60 to 70 percent, with some tasks seeing reductions of up to 85 percent. For global engineering teams, those percentages translate into millions of hours freed for higher-value work.

Ultimately, the success of agentic AI hinges on people as much as on technology. Without workforce readiness, targeted training, and seamless integration into daily workflows, even the most advanced systems will fail to gain traction.

The architecture that supports this transformation ties everything together. Governance and orchestration functions establish the rules and monitor compliance. A strong data engine ensures inputs are accurate, well-structured, and timely. Integration frameworks link AI capabilities to core enterprise systems. Observability tools track performance, detect drift, and make real-time corrections. This combination allows enterprises to scale AI responsibly, without sacrificing control or reliability.

Kalita closed on the same principle he began with: start governance early, pursue every other priority in parallel, and make governance the long pole that holds up the tent. For enterprises ready to match technical execution with organizational discipline, agentic AI can become a living, evolving part of everyday operations.


Mansi Mistri
Mansi Mistri is a Content Writer who enjoys breaking down complex topics into simple, readable stories. She is curious about how ideas move through people, platforms, and everyday conversations. You can reach out to her at mansi.mistri@aimmediahouse.com.
