AIM Media House

What Are the Risks in AI Payments Governance?

"In a world where AI makes the decision, governance is what earns the right to make it."

Artificial intelligence is no longer just influencing decisions in payments. It is starting to make them. And according to i2c CEO Amir Wain, most institutions are not yet built to govern the difference.

In a piece published in a PYMNTS eBook on April 24, 2026, titled "AI Runs Payments. Governance Decides What Happens Next," Wain argues that the payments industry has been asking the wrong question.

The conversation about AI risk has centred on models: their accuracy, bias, and explainability. The actual risk, Wain contends, is not the model. It is the architecture the model operates in.

For years, payment institutions built their infrastructure by stitching together different systems to solve discrete problems across credit, debit, and core banking.

That fragmentation created gaps: between systems, between decisions, and between lines of ownership. When AI is introduced into that environment, governance does not fail at the model level. It fails in the gaps between systems.

The Architecture Argument

Wain describes i2c's platform as built around a customer-centric rather than product-centric model, on the premise that there will always be new products that cannot be predicted, so the system itself must be adaptable.

That choice, he acknowledges, was not the fastest path. It took longer upfront. But it created the consistency and control that now apply directly to AI governance, a set of principles that only hold when the underlying infrastructure is unified rather than fragmented.

The fraud environment makes the stakes concrete. Wain draws a deliberate distinction: eliminating fraud entirely is technically possible, since an institution could decline every transaction and record zero fraud. But that is not a strategy.

The real objective is minimising friction while maximising fraud capture, a balance that requires real-time intelligence and dynamic response to an environment that is shifting constantly.

Agentic AI and the Accountability Gap

The next phase Wain describes is agentic AI. The shift from AI that assists human decisions to AI that makes decisions independently raises a question the industry has not yet answered consistently: if AI is acting, who is accountable? Wain's answer is that autonomy does not remove accountability.

Responsible AI governance requires transparency, consent, and traceability in how data is used, according to Wain. The human role in that framework becomes more strategic rather than less: overseeing AI to ensure decisions remain fair, explainable, and aligned with business outcomes, rather than executing decisions directly.

The institutions that will lead the next phase of payments AI, Wain argues, are not the ones moving fastest.

They are the ones building the discipline and architecture to govern decisions that are now real-time, automated, and consequential. "In a world where AI makes the decision," he writes, "governance is what earns the right to make it."

Key Takeaways

  • Recognize that AI's decision-making in payments requires robust governance frameworks.
  • Identify existing gaps in governance that hinder effective AI utilization in payment systems.
  • Shift focus from questioning AI models to addressing governance challenges in the payments industry.
  • Understand that effective governance earns the right to make AI-driven decisions.
  • Emphasize the need for institutions to adapt to AI's evolving role in payments.