Pavlé Sabic, senior director of generative AI solutions and strategy at Moody’s, believes the future of AI in regulated industries is not about replacing humans but about augmenting them with agentic AI. “Agentic AI isn’t just automating tasks; it’s combining automation with human oversight to deliver faster, more consistent outputs while preserving critical judgement,” he said.
“Financial institutions today face manual inefficiencies, fragmented data, heavy and evolving regulatory burdens, analyst overload, and outdated systems,” Sabic explained. These obstacles slow work down, increase risk, and make it difficult to maintain accuracy and compliance. “That’s why intelligent automation and agentic AI matter. They not only speed up processes but also improve decision-making quality at scale.”
One example he cited is the inefficiency and risk in governance, auditing, and compliance checks, which are traditionally repetitive, manual, and only partially automated. By combining large language models (LLMs) with retrieval-augmented generation (RAG), Moody’s builds AI systems that are probabilistic but tuned for consistency, enabling full auditability throughout the workflow.
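To make the idea concrete, the sketch below shows one simplified way a RAG pipeline can be made auditable: every answer is grounded in a vetted document store and logged with the sources and prompt hash needed to reproduce it. This is an illustrative assumption, not Moody’s actual implementation; the toy retriever, the stubbed `generate` call, and the document IDs are invented for demonstration.

```python
# Illustrative RAG pipeline with an audit trail (hypothetical, simplified).
# A production system would use a vector index and a governed LLM endpoint;
# here the retriever and model call are stubs so the example runs standalone.
import json
import hashlib
from datetime import datetime, timezone

# A small, vetted document store standing in for proprietary research.
VETTED_DOCS = [
    {"id": "CR-2023-014", "text": "Sector leverage ratios rose modestly in 2023."},
    {"id": "CR-2024-002", "text": "Default rates in the retail sector remain below the long-run average."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank vetted documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        VETTED_DOCS,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Stub for a low-temperature LLM call tuned for consistent output."""
    return f"[draft analysis grounded in provided context] {prompt[:60]}..."

def answer_with_audit(query: str, audit_path: str = "audit_log.jsonl") -> str:
    """Answer a query from vetted context and record how the answer was produced."""
    context = retrieve(query)
    prompt = "Context:\n" + "\n".join(d["text"] for d in context) + f"\n\nQuestion: {query}"
    output = generate(prompt)
    # Record everything needed to reproduce and review the result later.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "source_ids": [d["id"] for d in context],
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output

if __name__ == "__main__":
    print(answer_with_audit("How are retail sector default rates trending?"))
```

The audit log, not the model itself, is what lets a reviewer trace each output back to the exact sources and prompt that produced it.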
Augmenting Humans, Not Replacing Them
Sabic emphasized that this AI isn’t about replacing analysts or decision-makers but about augmenting their capabilities. “Agentic AI helps perform the busy work of the interns, tasks that can be automated with straightforward workflows, freeing humans for more strategic activities.”
For instance, Moody’s agentic solutions help banks analyze thousands of credit origination documents, synthesizing industry trends and firm-specific data rapidly, reducing processing times by about 60%. This human-led automation approach ensures outputs remain reliable and decision-ready. “Humans remain in the loop, reviewing AI-generated credit memos, making judgement calls, and exercising oversight,” said Sabic.
A Single Orchestrated Workflow
Unlike standalone LLMs or simple automation, agentic AI systems coordinate multiple AI agents and tools across different platforms to complete complex, multi-step workflows. “They’re not confined to a single prompt or platform,” Sabic explained. “You can tell the agent to pull together ten different sections of a credit report, format them exactly, include charts and industry-specific analysis, and produce a branded document ready for review.”
This capability relies heavily on protocols such as the Model Context Protocol (MCP), which let agents call different APIs and integrate diverse data sources seamlessly. “The power comes from orchestration, bringing many specialized AI components together coherently.”
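The sketch below shows the orchestration idea in miniature: a coordinator walks an ordered plan, dispatches each step to a specialized tool, and assembles the results into one formatted document. It does not use the real MCP SDK; the tool registry, tool names, and report sections are hypothetical stand-ins for the kind of capabilities an agent might expose over such a protocol.

```python
# Hypothetical orchestration sketch: one coordinator dispatching a multi-step
# credit-report workflow to specialized tools. Tool names and sections are
# invented for illustration and stand in for MCP-exposed capabilities.
from typing import Callable

# Registry of "tools" an agent could call across different platforms.
TOOLS: dict[str, Callable[[str], str]] = {
    "fetch_financials": lambda firm: f"Key ratios for {firm} (stub data).",
    "sector_analysis": lambda firm: f"Sector outlook relevant to {firm} (stub data).",
    "draft_summary": lambda firm: f"Executive summary for {firm} (stub draft).",
}

# The workflow is an ordered plan: each step names a report section and a tool.
REPORT_PLAN = [
    ("Financial Overview", "fetch_financials"),
    ("Industry Context", "sector_analysis"),
    ("Summary", "draft_summary"),
]

def build_report(firm: str) -> str:
    """Run each step, collect the outputs, and assemble a formatted document."""
    sections = []
    for title, tool_name in REPORT_PLAN:
        result = TOOLS[tool_name](firm)  # one tool call per workflow step
        sections.append(f"## {title}\n{result}")
    return f"# Credit Report: {firm}\n\n" + "\n\n".join(sections)

if __name__ == "__main__":
    # The assembled draft would still go to a human analyst for review.
    print(build_report("Acme Manufacturing"))
```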
A key differentiator for Moody’s AI is the use of its vast proprietary datasets: decades of credit research, risk analytics, sector data, and real-time news that provide rich, trustworthy context. “It’s not magic; it’s domain expertise encoded in data,” Sabic said. By anchoring AI workflows to this vetted data, Moody’s helps its clients meet strict audit and regulatory requirements.
This contrasts with more general AI tools that rely on broad internet data, often yielding inconsistent or unverifiable outputs. “Regulated industries demand that every decision can be traced, validated, and is consistent each time it’s produced.”
Overcoming the Implementation Hurdle
Despite the clear value, adoption of agentic AI poses challenges. “Implementation isn’t just about deploying technology,” Sabic said. “It involves redesigning workflows, training the workforce, and strategically aligning AI with core business priorities.” Many organizations are moving agentic AI from experimental innovation budgets into routine business functions, raising the stakes for successful integration.
Sabic also highlighted the “switching cost” challenge. Constantly changing LLM backends or AI providers can disrupt workflows and increase risk. Moody’s addresses this by offering agentic solutions that insulate clients from such turbulence, providing stability and continuity.
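One common architectural pattern for this kind of insulation is a thin, provider-agnostic interface so that business logic never depends on a specific model vendor. The sketch below is an assumption about how such a layer might look, not a description of Moody’s architecture; the backend classes and vendor names are stubs.

```python
# Hypothetical provider-abstraction layer: workflows call a stable interface,
# so swapping the underlying LLM backend does not touch business logic.
# The vendor classes are illustrative stubs, not real SDK calls.
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Stable contract the rest of the workflow depends on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorABackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[vendor A response to: {prompt[:40]}...]"  # stubbed call

class VendorBBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[vendor B response to: {prompt[:40]}...]"  # stubbed call

def summarize_credit_memo(text: str, backend: LLMBackend) -> str:
    """Business logic stays identical regardless of which backend is injected."""
    return backend.complete(f"Summarize for an analyst: {text}")

if __name__ == "__main__":
    # Switching providers becomes a configuration change, not a rewrite.
    print(summarize_credit_memo("Q3 revenue grew 4%...", VendorABackend()))
    print(summarize_credit_memo("Q3 revenue grew 4%...", VendorBBackend()))
```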
A frequent concern in regulated sectors is liability and supervision. Sabic stressed that agentic AI functions as a powerful assistant, not an autonomous decision-maker. “We design agentic systems to require human validation and supervision,” he said. Analysts become supervisors who oversee digital FTEs (full-time equivalents), focusing on reviews rather than routine data gathering or formatting.