Mastercard's New AI Doesn't Read. It Counts.

"We believe this same gen-AI technology won't just transform chat, it will transform commerce."
Almost every major AI investment in financial services over the past three years has followed the same playbook. Build a large language model, fine-tune it on internal documents, and deploy it as a copilot or chatbot. JPMorgan Chase, Goldman Sachs, Morgan Stanley, and even Mastercard have all done exactly this.
But transaction data is not language. It does not live in documents or emails. It lives in rows, columns, and patterns that LLMs were never designed to read. For that problem, Mastercard has made a different architectural bet entirely.
The payments company has built a foundation model that works nothing like the LLMs powering ChatGPT, Claude, or the AI assistants being rolled out across Wall Street.
Where those models are trained on text, Mastercard's model is trained on transactions. Where LLMs predict the next word in a sentence, Mastercard's model predicts the next event in a payment sequence. The architecture, the data, and the purpose are all different.
The company calls it a Large Tabular Model (LTM). It is a deep learning neural network trained on structured data: rows, columns, relationships, and patterns, rather than the unstructured inputs that define most of today's generative AI.
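The distinction can be sketched in a few lines. A card's history is rows of typed fields, and the next-event objective treats each transaction the way a language model treats the next word. The field names and values below are hypothetical illustrations, not Mastercard's actual schema or training setup.

```python
# Illustrative only: hypothetical transaction rows, not Mastercard's schema.
transactions = [
    {"mcc": 5411, "amount": 42.10, "country": "US", "hour": 18},  # grocery
    {"mcc": 5541, "amount": 55.00, "country": "US", "hour": 19},  # fuel
    {"mcc": 5812, "amount": 23.75, "country": "US", "hour": 20},  # restaurant
]

def to_training_pairs(history):
    """Frame next-event prediction: every prefix of the sequence is the
    context, and the transaction that follows is the target, analogous
    to next-word prediction over a sentence."""
    return [(history[:i], history[i]) for i in range(1, len(history))]

pairs = to_training_pairs(transactions)
print(len(pairs))          # 2 context/target pairs from 3 transactions
print(pairs[0][1]["mcc"])  # 5541: the event to predict from the first prefix
```

The model never sees free text; it sees sequences of structured rows, which is why the architecture diverges from an LLM's.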
Mastercard has trained the current version on billions of anonymized card transactions, with plans to scale to hundreds of billions, adding merchant location data, fraud data, authorization data, chargeback data, and loyalty program data as the model matures.
Payments data does not look like language. It does not look like images or video. It is structured, transactional, and relational, the kind of data that LLMs are not built to exploit.
A December 2025 study published on arXiv found that LLMs perform poorly when applied directly to tabular fraud detection, citing difficulty reasoning over many features and the absence of contextual information. Mastercard's argument is that building an LLM on top of transaction data would be the wrong tool for the job.
"We believe this same gen-AI technology won't just transform chat, it will transform commerce," Steve Flinter, Distinguished Engineer at Mastercard, wrote in a company blog post. "It will make payments faster, retail experiences more personalized and cybersecurity tools more precise."
The model is not being built to talk to customers. It is being built to see things in data that humans and existing AI models cannot.
The Problem With Rules
Mastercard's existing cybersecurity AI models are built the traditional way. Data scientists start with raw transaction data, then enrich it with manually defined features, rules and signals that tell the model what to look for.
Examples include a sudden spike in purchase frequency, transactions appearing in two different countries within minutes of each other, and unusual spending categories for a particular account.
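Rules of that kind are straightforward to express in code. The sketch below shows roughly what a purchase-velocity rule and a geo-velocity rule look like; the thresholds and field names are made up for illustration, not Mastercard's actual logic.

```python
from datetime import datetime, timedelta

# Hypothetical hand-written fraud rules; thresholds are illustrative only.

def velocity_flag(txns, window_minutes=10, max_txns=5):
    """Flag a sudden spike in purchase frequency."""
    if len(txns) < max_txns:
        return False
    span = txns[-1]["time"] - txns[-max_txns]["time"]
    return span <= timedelta(minutes=window_minutes)

def geo_flag(prev, curr, window_minutes=30):
    """Flag transactions in two countries within minutes of each other."""
    return (prev["country"] != curr["country"]
            and curr["time"] - prev["time"] <= timedelta(minutes=window_minutes))

t0 = datetime(2025, 1, 1, 12, 0)
history = [{"time": t0 + timedelta(minutes=i), "country": "US"} for i in range(5)]
print(velocity_flag(history))  # True: 5 purchases in 4 minutes
abroad = {"time": t0 + timedelta(minutes=10), "country": "FR"}
print(geo_flag(history[-1], abroad))  # True: US then FR within 30 minutes
```

Each rule encodes one human intuition about fraud, which is both its strength and, as the next section shows, its weakness.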
These rules work, and they catch a lot of fraud. But they also create a persistent problem: false positives. The model flags legitimate transactions as suspicious because the rules do not have enough context to distinguish unusual-but-real from unusual-but-fraudulent.
The wedding ring problem is the clearest example. A high-value, low-frequency purchase, such as an engagement ring, a piece of jewelry, or an expensive watch, looks anomalous to a rule-based system. It does not match the account's spending history, so it triggers the fraud model.
The transaction gets declined or flagged and the merchant loses a sale. The fraud model did exactly what it was designed to do and still produced the wrong outcome.
Mastercard's LTM approaches the same data differently. Rather than starting with human-defined rules, the model analyzes raw transaction data with minimal human input and learns independently which characteristics of the data are meaningful.
It identifies connections that a data scientist might not think to encode as a rule. In the wedding ring case, the model can learn from weak signals in the data and distinguish a legitimate high-value purchase from a fraudulent one.
"In our testing, we've already seen this new model outperform standard industry machine learning techniques, giving us promising early signs," Flinter wrote. "Very expensive but very infrequent purchases, such as when someone buys a wedding ring, tend to trigger current models today and cause a lot of false positives. In our experiments, our foundation model can better identify these legitimate transactions, with the model able to learn from relatively weak signals in the data."
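The contrast can be illustrated with a toy example: a hard amount threshold declines the ring, while a score built from weak contextual signals rescues it. The rule, the signals, and the weights below are invented stand-ins for demonstration, not Mastercard's actual models.

```python
# Illustrative contrast only: a hard rule versus a weak-signal score.
# All fields, weights, and thresholds are hypothetical.

def rule_based(txn, typical_amount):
    """A rule fires on any purchase far above the account's norm."""
    return txn["amount"] > 10 * typical_amount

def weak_signal_score(txn):
    """Combine weak signals a learned model could pick up on: a merchant
    the cardholder has visited before, a purchase in the home city, and
    a normal shopping hour. Higher score = more likely legitimate."""
    score = 0.0
    score += 0.4 if txn["known_merchant"] else 0.0
    score += 0.3 if txn["home_city"] else 0.0
    score += 0.3 if 9 <= txn["hour"] <= 21 else 0.0
    return score

ring = {"amount": 4800.0, "known_merchant": True, "home_city": True, "hour": 14}
print(rule_based(ring, typical_amount=60.0))  # True: the rule declines it
print(weak_signal_score(ring) >= 0.7)         # True: context says legitimate
```

The point of the foundation model is that no human has to enumerate those weak signals; the model is meant to find them in the raw data itself.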
False positives in fraud detection damage customer trust, increase operational costs, and create friction at the exact moment a payment needs to work. A model that reduces false positives without increasing false negatives would be genuinely valuable at the scale Mastercard operates.
In this model, all personal data is removed from transactions before training begins. The model learns from behavioral patterns rather than from anything that could identify an individual.
Mastercard's position is that the volume of anonymized transactions is sufficient. With enough data, the model can infer what is normal and what is anomalous without knowing who the account holder is.
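A minimal sketch of that kind of pre-training scrub looks like the following; the field names are hypothetical, not Mastercard's actual pipeline or schema.

```python
# Hypothetical anonymization step; field names are illustrative only.
PII_FIELDS = {"pan", "name", "email", "address"}

def anonymize(txn):
    """Drop directly identifying fields before training, keeping only
    the behavioral features the model learns from."""
    return {k: v for k, v in txn.items() if k not in PII_FIELDS}

raw = {"pan": "5555555555554444", "name": "A. Holder",
       "mcc": 5944, "amount": 4800.0, "country": "US"}
clean = anonymize(raw)
print(sorted(clean))  # ['amount', 'country', 'mcc']
```

What survives is behavior: merchant category, amount, geography. Nothing in the training rows points back to a person.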
The LTM also carries a different risk profile from LLMs. It does not generate text, hallucinate, or produce outputs that need to be fact-checked. It produces predictions and classifications based on patterns in transactional data, a narrower, more auditable function that is better suited to regulatory oversight.
Cybersecurity is the first deployment, but Mastercard has framed the LTM as a platform for a broader set of applications. Loyalty and rewards programs run on exactly the kind of structured data the LTM is built to process: transaction histories, redemption patterns, merchant relationships, and customer segments.
Personalization models, portfolio optimization, and internal analytics are all areas where Mastercard sees the model playing a role. There is also an internal efficiency argument. Mastercard currently maintains thousands of AI models, each built and maintained separately for different markets, customers, and use cases.
The overhead of training, validating, and monitoring thousands of individual models is substantial. The LTM is designed to be flexible enough to serve as a single foundation that can be fine-tuned for different tasks, potentially replacing many of those separate models with one.
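The "one foundation, many tasks" idea can be sketched as a shared encoder reused by small task-specific heads. Everything below, the features, thresholds, and tasks, is made up to show the shape of the design, not how Mastercard's model works.

```python
# Illustrative only: one shared "foundation" encoder, many small heads.

def shared_encoder(txn):
    """Stand-in for the foundation model: map a transaction to a small
    feature vector that every downstream task can reuse."""
    return [txn["amount"] / 100.0, float(txn["mcc"] == 5944)]  # 5944: jewelry

def fraud_head(features):
    """One fine-tuned head: a crude high-amount fraud signal."""
    return features[0] > 30.0 and features[1] == 0.0

def loyalty_head(features):
    """Another head on the same features: flag jewelry for a rewards bonus."""
    return features[1] == 1.0

txn = {"amount": 4800.0, "mcc": 5944}
z = shared_encoder(txn)
print(fraud_head(z), loyalty_head(z))  # False True
```

The economic argument is in the structure: the expensive part, the encoder, is trained once, and each new use case only adds a small head instead of a whole new model.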
Mastercard is developing APIs and toolkits to give internal teams access to the foundation model, so they can build applications on top of it without starting from scratch. That distribution strategy mirrors how the broader AI ecosystem has evolved: foundation models at the base, specialized applications built on top.
Mastercard is not alone in this architectural direction. Revolut, the digital banking company, built a transaction foundation model using a self-learning method called masked prediction, training it on its own payments data to improve fraud detection and predict customer purchases.
Using NVIDIA's AI stack, Revolut reported a 20% increase in fraud detection precision, better credit risk predictions, and a 9.6% improvement in cross-sell accuracy. The results are vendor-reported and not independently verified, but they signal that transaction-based tabular models are becoming a category, not just a single experiment.
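Masked prediction on tabular data can be sketched in a few lines: hide one column of a row and train the model to predict it back from the remaining columns. The fields and masking scheme below are assumptions for illustration, not Revolut's actual implementation.

```python
# Sketch of masked prediction on a transaction row; fields are hypothetical.
MASK = "<MASK>"

def mask_row(row, field):
    """Hide one column; the self-supervised training task is to predict
    it back from the remaining columns, so no fraud labels are needed."""
    masked = dict(row)
    target = masked[field]
    masked[field] = MASK
    return masked, target

row = {"mcc": 5411, "amount": 42.10, "country": "US"}
masked, target = mask_row(row, "country")
print(masked["country"], target)  # <MASK> US
```

Because the objective is self-supervised, the model can train on every transaction it sees, not just the tiny fraction that has been labeled as fraud.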
The computing infrastructure for the LTM comes from NVIDIA, which provides the accelerated computing platform for processing large-scale structured data. Databricks handles data engineering and model development. Mastercard announced both partnerships at the NVIDIA GTC 2026 conference.
Training a model on hundreds of billions of transactions requires compute at a scale that most organizations do not have access to. NVIDIA's accelerated computing platform enables the processing speeds that make this kind of training feasible.
Mastercard has not disclosed specific performance metrics beyond the qualitative descriptions in its blog post. The claim that the model outperforms standard industry machine learning techniques in early testing is company-reported and not independently verified.
Performance claims remain limited to vendor reports and should not be treated as conclusive. Mastercard acknowledges that no single model will perform well in all scenarios, which is why the deployment strategy is hybrid: the LTM will work alongside existing fraud detection systems rather than replacing them.
What the LTM does establish is that there is more than one way to build AI for financial services. LLMs have dominated the conversation in banking for the past three years. Mastercard has built something that works on different data, for different problems, using a different architecture. Whether it performs better in production than existing models remains to be seen.