AI and Embedded Humans Join Forces to Outwit Fraudsters

The fraud landscape is changing faster than ever — and increasingly, the bad actors are using artificial intelligence. As deepfakes, synthetic identities and automated social engineering grow more sophisticated, fraud fighters are turning to AI of their own to detect the anomalies hidden in billions of data points.

But as Matthew Pearce, vice president of Fraud Risk Management and Dispute Operations at i2c, puts it, “human oversight is not just a compliance checkbox for us — it’s actually a competitive advantage.”

Pearce told PYMNTS in an interview that while agentic AI can spot patterns and surface anomalies, the models themselves can still make contextual errors.

That’s where trained analysts come in. “Embedded human-in-the-loop governance allows i2c to combine the speed and scale of machine intelligence with the judgment, nuance and ethical reasoning that an experienced analyst can bring to the table,” he said.

The company’s hybrid approach, he said, reduces false positive rates, shortens investigation cycles and “preserves a smoother customer experience, all while providing audit trails and explainability to our counterparts and regulators.”

That’s increasingly important as new regulations — including the latest AI and data-use laws in California — mandate explainability and transparency in how financial institutions deploy automated systems. “Regulators have been crystal clear,” Pearce said. “AI that materially affects customers must be explainable, auditable and accountable.”

He added that European and U.S. supervisory guidance now “emphasizes transparency, human oversight, and traceability for high-risk systems.”

Scaling AI as an Exercise in Leverage

For Pearce, scaling AI is less about replacing people and more about using them where they matter most. “Scaling oversight is an exercise in leverage,” he said. “We use AI to filter and prioritize, and then we apply human analysts to where we can add more marginal value instead of looking at the mundane stuff.”

“i2c’s model tiers risk by confidence and impact. Low-confidence or high-impact events are automatically escalated through the system to an analyst,” he said, while the rest are handled programmatically.
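A minimal sketch of the confidence-and-impact tiering Pearce describes might look like the following. The `ScoredEvent` fields, the `route` function, and the thresholds are illustrative assumptions for this article, not i2c's actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds: assumptions, not i2c's actual values.
CONFIDENCE_FLOOR = 0.85   # below this, the model's score counts as "low confidence"
IMPACT_CEILING = 5_000.0  # dollar exposure above which an event is "high impact"

@dataclass
class ScoredEvent:
    event_id: str
    fraud_score: float       # model output in [0, 1]
    model_confidence: float  # calibrated confidence in [0, 1]
    exposure_usd: float      # potential loss if the event is fraudulent

def route(event: ScoredEvent) -> str:
    """Tier an event by confidence and impact, per the interview:
    low-confidence or high-impact events escalate to a human analyst;
    everything else is handled programmatically."""
    if event.model_confidence < CONFIDENCE_FLOOR or event.exposure_usd > IMPACT_CEILING:
        return "escalate_to_analyst"
    if event.fraud_score >= 0.5:
        return "auto_decline"
    return "auto_approve"

# Example: a confident, low-exposure event never touches an analyst's queue.
print(route(ScoredEvent("evt-1", fraud_score=0.12,
                        model_confidence=0.97, exposure_usd=42.50)))
# -> auto_approve
```

The design point is the two-sided escalation test: an event escapes automation if the model is unsure or if the stakes are high, which is what concentrates analyst time on the cases Pearce calls high-value.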

That frees up human specialists to focus on “high-value investigations rather than routine triage,” according to Pearce.

“Scaling oversight is an investment,” he said, “but if it’s engineered properly, it’s going to lower your cost per case and reduce your fraud losses over time.”

Reducing Friction Without Sacrificing Security

The false positive rate is also a key performance metric. “False positives are the biggest customer experience killer,” Pearce said. “It’s annoying for the cardholder, it drives up support costs, and it increases churn.” i2c’s hybrid system uses AI to score and contextualize events in real time, routing ambiguous or high-impact cases to analysts who can “humanize decisions.”

“For example, they can confirm if a velocity spike is a legitimate travel transaction or if it’s really a coordinated attack,” he said. “Because of the human reviewer, we can resolve borderline cases rapidly and genuine customers see fewer declines and faster restoration if we have blocked something.”
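As a rough illustration of how the travel-versus-attack call might be encoded, here is a hedged sketch. The transaction fields, cutoffs, and `classify_velocity_spike` helper are hypothetical, standing in for whatever signals an analyst or model would actually weigh.

```python
from datetime import datetime, timedelta

def classify_velocity_spike(transactions: list[dict]) -> str:
    """Hypothetical heuristic: each transaction is a dict with
    'device_id', 'city', and 'timestamp' keys."""
    devices = {t["device_id"] for t in transactions}
    cities = {t["city"] for t in transactions}
    span = (max(t["timestamp"] for t in transactions)
            - min(t["timestamp"] for t in transactions))

    # Many devices hitting one card within minutes suggests coordinated fraud.
    if len(devices) > 3 and span < timedelta(minutes=30):
        return "likely_coordinated_attack"
    # One device moving between cities over hours is consistent with travel.
    if len(devices) == 1 and len(cities) > 1 and span > timedelta(hours=2):
        return "likely_legitimate_travel"
    # Anything ambiguous goes to a human, matching the routing described above.
    return "needs_analyst_review"

spike = [
    {"device_id": "d1", "city": "Chicago", "timestamp": datetime(2024, 5, 1, 9, 0)},
    {"device_id": "d1", "city": "Denver",  "timestamp": datetime(2024, 5, 1, 14, 0)},
]
print(classify_velocity_spike(spike))  # -> likely_legitimate_travel
```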

For financial institutions, he added, this translates to lower dispute volume, fewer unnecessary write-offs, reduced customer service load and higher customer retention.

In short, “it’s fewer customer complaints and better economics,” Pearce said — a rare combination in risk management.

Humans Evolve as AI Matures

As AI becomes more capable, Pearce believes the human role will shift toward strategy, oversight, and governance.

He expects three concrete changes. “First, more governance-centric roles — humans become model stewards and domain experts that validate, explain, and tune models over time. Second, there will be fewer repetitive triage tasks,” he said. “AI will handle the large volume of mundane items, and humans will act in the context of regulation demands and judgment. Third, stronger collaboration — humans and machines will co-design experiments, use synthetic data to stress models, and iterate faster.”

The result, he said, will be continual improvement loops rather than periodic refreshes.

“In short, the human role will become more strategic and supervisory — overseeing the AI to ensure it’s working at scale, doing what we anticipate, and ensuring decisions are fair, explainable, and aligned with business outcomes,” Pearce said.

Pearce predicted that the combination of human judgment and automated scale will become an industry standard. “We do expect this hybrid model to really become the de facto model for regulated institutions as we move forward,” he said. 

Source: https://www.pymnts.com/