
Person Not Present: What AI Agents Mean for Your Fraud Detection Stack


Every fraud detection system in production today is built on one assumption: a human being initiates the transaction. That assumption no longer holds. AI agents can now initiate real payments on customers’ behalf, at scale.

Mastercard launched Agent Pay in partnership with Microsoft, Stripe, and Google. Google’s Agent Payments Protocol has over 60 institutional backers, including American Express, PayPal, and Coinbase. India launched the world’s first national pilot integrating UPI directly into ChatGPT. Without the customer touching a screen, their AI agent can pay for groceries, book a flight, compare insurance premiums, and renew a policy.

This is payments infrastructure being laid across the world’s largest economies. And it changes the fraud equation in ways the industry has not fully absorbed.


From card-not-present to person-not-present

For two decades, card-not-present (CNP) defined the dominant fraud category. The physical card was absent, but a human was still on the other end, confirming intent, entering credentials, completing the action.

Javelin Strategy & Research describes what is now replacing it: person-not-present. The human is no longer at the point of transaction at all. The AI agent is the front-line actor. The authorization is indirect, and the initiating logic is not human-readable.

The distinction matters because CNP was an evolution within an existing trust model. Person-not-present breaks the trust model itself. Every verification method, every risk score, and every fraud rule in the stack was designed to answer one question: “Is this the right person?” When no person is present, that question has no answer.

And this is not a distant scenario. Datos Insights projects that 82 percent of midsize to large financial institutions will deploy GenAI into banking and payments workflows by the end of 2026. The infrastructure for AI-initiated transactions is scaling. The framework for monitoring them does not yet exist.


Why the current detection stack will not catch it

Rules-based systems evaluate transactions against static thresholds: Is this amount unusual? Is this merchant new? Is this device recognized? These questions assume a human made a decision to transact. When an AI agent initiates a purchase within its pre-set parameters, the transaction will look normal. It may pass velocity checks, come from a recognized device, and still be fraudulent.
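To see why, here is a minimal sketch of such a rules engine in Python. The `Transaction` class, thresholds, and identifiers are illustrative assumptions, not any vendor’s production logic:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    merchant_id: str
    device_id: str

# Illustrative static thresholds; a production engine carries hundreds of rules.
AMOUNT_LIMIT = 500.00
KNOWN_MERCHANTS = {"grocer-01", "grocer-02", "utility-co"}
KNOWN_DEVICES = {"device-abc"}

def passes_rules(txn: Transaction) -> bool:
    """True if the transaction clears every static threshold."""
    return (
        txn.amount <= AMOUNT_LIMIT              # Is this amount unusual?
        and txn.merchant_id in KNOWN_MERCHANTS  # Is this merchant new?
        and txn.device_id in KNOWN_DEVICES      # Is this device recognized?
    )

# An agent-initiated purchase inside its pre-set parameters clears every rule,
# even if the agent has been compromised: no human decision is ever evaluated.
agent_txn = Transaction(amount=120.00, merchant_id="grocer-01", device_id="device-abc")
assert passes_rules(agent_txn)
```

Every check passes because every check was designed to profile a human decision that, here, never happened.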

Authentication will not catch it: there is no person present to authenticate. Bot detection will not catch it: the agent is legitimate and was invited in. The challenge is no longer separating bots from humans. It is separating legitimate agents from compromised ones that behave identically.

Consider what is no longer an edge case: a compromised agent redirected to transact with a fraudulent merchant. An agent exploited through prompt injection, executing transactions the customer never intended. An agent operating outside its authorized scope, in territory where liability frameworks have not yet been written. Gartner projects that by 2029, over 50 percent of successful attacks against AI agents will exploit access control issues through prompt injection.


The behavioral intelligence question

When the transaction initiator is not human, point-of-authentication controls lose their explanatory power. The question “Is this the right person?” becomes irrelevant. What replaces it is a different question entirely: “Does this action reflect the customer’s intent?”

That is a behavioral intelligence question. Answering it requires detection infrastructure built to evaluate patterns, sequences, and intent across the full session, not just the authentication moment.

This means analyzing how a session unfolds over time, evaluating whether:

  1. the sequence of events leading to a payment reflects genuine intent or manipulation
  2. an agent-initiated transaction aligns with the customer’s historical patterns of engagement
  3. the agent itself is operating within the boundaries the customer established (see the sketch after this list)
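A minimal sketch of those three checks, assuming illustrative event and category structures. `SessionEvent`, `intent_signals`, and the category sets are stand-ins for this article, not a Clari5 or standards-body API:

```python
from typing import NamedTuple

class SessionEvent(NamedTuple):
    kind: str  # e.g. "browse", "compare", "add_to_cart", "pay"
    timestamp: float

def intent_signals(events: list[SessionEvent],
                   historical_categories: set[str],
                   txn_category: str,
                   agent_scope: set[str]) -> dict[str, bool]:
    """Evaluate the three session-level questions as simple boolean signals."""
    kinds = [e.kind for e in events]
    return {
        # 1. Did deliberation precede the payment, or did it appear cold?
        "genuine_sequence": "browse" in kinds and kinds[-1] == "pay",
        # 2. Does the purchase align with the customer's historical engagement?
        "matches_history": txn_category in historical_categories,
        # 3. Is the agent inside the boundaries the customer established?
        "within_scope": txn_category in agent_scope,
    }

# Example: an agent pays with no preceding browse activity, in a new category.
session = [SessionEvent("pay", 1700000000.0)]
signals = intent_signals(session, {"groceries"}, "electronics", {"groceries"})
# {'genuine_sequence': False, 'matches_history': False, 'within_scope': False}
```

In practice each boolean would be a graded score feeding a model, but the questions the code asks are the ones that matter.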

Financial institutions that have invested in cross-channel behavioral monitoring are structurally better positioned for this shift than those relying on authentication-centric or rules-based models. The detection paradigm does not need to determine whether the actor is human or machine. It needs to determine whether the action is consistent with the customer’s intent. That distinction is what separates institutions that will detect agent-driven fraud from those that will discover it after settlement.


What this looks like in practice

Consider a retail banking customer who has used their account in a consistent pattern for three years: salary credits on the 28th, rent and utilities in the first week of the month, grocery purchases from two or three familiar merchants, and occasional travel bookings planned days in advance with browsing activity preceding the purchase.

The customer enables an AI shopping assistant. The agent is legitimate, authorized, and operating from a recognized device. Within its first week of activity, it initiates a high-value electronics purchase from a merchant the customer has never transacted with, in a product category with no historical precedent, and at a time that is inconsistent with the customer’s established transaction patterns.

A rules engine will see a valid payment within approved limits. Authentication is not triggered because the agent is operating under existing credentials. Bot detection is not triggered because the agent is not a bot. It is an authorized tool. But behavioral intelligence will see something different.

Behavioral intelligence will see a transaction that breaks the customer’s established pattern of engagement across multiple dimensions simultaneously: merchant category, transaction value, timing, and the absence of the browsing-then-purchasing sequence that has preceded every similar transaction in the customer’s history. That cluster of deviations, evaluated together and in real time, will generate the risk signal.
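One way to picture that evaluation: each dimension contributes a deviation measure, and the cluster is scored together rather than rule by rule. The weighted sum and threshold below are deliberate simplifications of what a behavioral model actually does, with made-up dimension names and weights:

```python
def deviation_score(features: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension deviations (0.0 = typical, 1.0 = strong break)
    into a single risk signal. Dimension names are illustrative."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

# Illustrative weights; a production model learns these from behavior, per customer.
weights = {"merchant_category": 0.30, "transaction_value": 0.25,
           "timing": 0.15, "missing_browse_sequence": 0.30}

# The electronics purchase deviates on every dimension at once.
txn = {"merchant_category": 1.0, "transaction_value": 0.9,
       "timing": 0.8, "missing_browse_sequence": 1.0}

score = deviation_score(txn, weights)  # 0.945
if score > 0.7:  # illustrative threshold
    print(f"Risk signal raised before settlement: {score:.3f}")
```

No single dimension would trip a static rule; the simultaneous break across all four is what generates the signal.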

The agent may have been compromised through prompt injection. It may have been redirected to a fraudulent merchant optimized to look legitimate to automated tools. Or it may simply be the customer trying something new. Behavioral intelligence does not need to know the cause to flag the anomaly. It needs to surface the deviation from intent so the institution can act before settlement, not after.


The liability gap no one has closed

When a customer taps ‘Buy’, liability frameworks are well established. They are not when the customer’s AI agent does it.

OpenAI’s developer documentation places payment liability on merchants and their payment service providers. Google’s Agent Payments Protocol (AP2) introduces cryptographically signed mandates to create audit trails. Visa is tokenizing agent credentials with spending controls. Everyone is drawing lines, but no one knows where the boundaries will settle.
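The signed-mandate idea is easiest to see with ordinary public-key signatures. The sketch below uses Ed25519 via the Python `cryptography` package; the mandate fields and flow are illustrative stand-ins, not AP2’s actual schema or wire format:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The customer's wallet holds the private key; verifiers hold only the public key.
wallet_key = Ed25519PrivateKey.generate()

# Illustrative mandate: what the customer authorized the agent to do.
mandate = {
    "agent_id": "shopping-assistant-01",
    "max_amount": 200.00,
    "merchant_categories": ["groceries"],
    "expires": "2026-01-31T00:00:00Z",
}
payload = json.dumps(mandate, sort_keys=True).encode()
signature = wallet_key.sign(payload)  # created when the customer grants the mandate

def mandate_is_authentic(public_key, payload: bytes, signature: bytes) -> bool:
    """Any party in the chain can verify the mandate was not tampered with."""
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Verification yields an audit trail: the transaction can then be checked
# against the limits the customer actually signed.
assert mandate_is_authentic(wallet_key.public_key(), payload, signature)
```

A signed mandate proves what the customer authorized; it does not, by itself, prove that a given transaction honors the customer’s intent. That gap is where detection still has to work.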

Javelin’s payments analysts put it plainly: the players involved are keenly aware of what is at stake, but the liability questions remain open. For financial institutions, that uncertainty is not a reason to wait. It is the reason to ensure that the detection infrastructure can distinguish between a legitimate agent acting on verified intent and an agent that has been manipulated, before the settlement window closes.


The question that matters now

When the AI layer your institution deployed to improve customer experience becomes the vector through which fraud enters your system, what in your current detection stack will catch it?

Gartner projects that 25 percent of enterprise breaches will trace to AI agent abuse by 2028. The institutions answering that question now will not be reacting to the next wave of agent-driven fraud; they will have already built for it.