Building AI’s Trust Infrastructure Requires More Than Algorithms

Artificial intelligence (AI) has quickly become the backbone of B2B payments, but it’s part of a larger set of tools needed to build the “trust infrastructure” for transactions in the age of digital fund flows.

As Alex Yen, vice president of product architecture at Persona, told PYMNTS, “Trust infrastructure is about shifting from this concept of a one-time verification … to continuous monitoring, continuous reverification … and maintaining trust over time.” That, he added, requires technology that can adapt as fast as the fraudsters using it.

When Fraud Thinks for Itself

Fraudsters have learned to leverage generative AI as efficiently as the businesses trying to fight them. “Some of these techniques aren’t necessarily new,” Yen said. “What’s different is that they’ve never been easier to reproduce and create than before.” He noted that tools capable of generating convincing fake documents, faces, or voices are “ever more accessible,” making detection exponentially harder.

Yen cautioned, “Some of the fraud we’re seeing is not even detectable by the human eye. We’ve had cases where our systems detect it as fraud, but a manual reviewer might not.” Persona responds by training its own machine-learning models to spot the artifacts or features that generative AI fraud attempts leave behind, Yen said.
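As a rough illustration of that approach (the features, training data and threshold below are hypothetical, not Persona’s actual model), such a detector might be a classifier trained on artifact features extracted from submitted documents:

```python
# A minimal sketch, not Persona's model: train a binary classifier on
# hand-engineered artifact features extracted from submitted documents.
# Feature names, data and the 0.8 threshold are all hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-document features: compression inconsistency,
# noise-pattern uniformity, font-rendering anomaly score.
X_train = np.array([
    [0.02, 0.10, 0.05],  # labeled genuine
    [0.71, 0.88, 0.64],  # labeled AI-generated forgery
    [0.05, 0.15, 0.02],  # genuine
    [0.80, 0.92, 0.77],  # forgery
    # ...in practice, many thousands of labeled examples
])
y_train = np.array([0, 1, 0, 1])

clf = GradientBoostingClassifier().fit(X_train, y_train)

def looks_generated(features: list[float]) -> bool:
    # Flag anything scoring above a tuned threshold, even when a human
    # reviewer would not spot the forgery by eye.
    return clf.predict_proba([features])[0, 1] > 0.8
```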

Why AI Alone Isn’t Enough

Those challenges illustrate AI’s paradox: powerful, yet insufficient on its own. “There is no single silver bullet for identifying fraud,” Yen said. “It’s really an iterative process, and this multi-factor process, that you need to take.”

That multi-factor approach includes what Persona calls multi-signal analysis — correlating data from identity, device behavior, network patterns and metadata to detect inconsistencies invisible in any one dimension. The company also builds feedback loops and a layered defense rather than “one monolithic model,” since models drift as behaviors change. “Fraud behaviors change,” Yen said. “And unless that model is able to adapt … at some point it will require human intervention to retrain or update it,” he told PYMNTS.
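To make the idea concrete, here is a minimal sketch of multi-signal scoring in Python. The signal names, weights and thresholds are assumptions for illustration, not Persona’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Per-submission signals; names and scales are illustrative only."""
    doc_score: float           # 0-1, likelihood the ID document is genuine
    face_match: float          # 0-1, selfie-to-document similarity
    device_reputation: float   # 0-1, history of this device fingerprint
    network_risk: float        # 0-1, e.g. proxy/VPN or data-center IP
    metadata_consistent: bool  # do timestamps, geolocation and locale agree?

def risk_score(s: Signals) -> float:
    """Correlate independent signals; no single one decides the outcome."""
    score = (
        0.35 * (1 - s.doc_score)
        + 0.25 * (1 - s.face_match)
        + 0.20 * (1 - s.device_reputation)
        + 0.20 * s.network_risk
    )
    # Cross-signal inconsistency is itself a signal: a flawless document
    # submitted from a risky device is more suspicious than either alone.
    if not s.metadata_consistent:
        score = min(1.0, score + 0.15)
    return score

def route(s: Signals) -> str:
    """Thresholds are illustrative; production systems tune them over time."""
    r = risk_score(s)
    if r < 0.2:
        return "auto-approve"
    if r > 0.7:
        return "auto-decline"
    return "human-review"  # the ambiguous middle goes to a reviewer
```

In this sketch, a flawless document paired with a risky network and mismatched metadata lands in the review queue rather than auto-approving, the kind of inconsistency that is invisible in any single dimension.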

The Human Layer of Trust

For Yen, “AI-augmented human review” is the cornerstone of reliability.

That judgment becomes critical in know-your-customer (KYC) and know-your-business (KYB) workflows, where AI can flag anomalies but people make the final calls. “A model can identify risky entities,” he said, “but we wouldn’t necessarily say it can decide whether that KYC passes or fails.”

Combining the two, rather than substituting one for the other, produces more accurate results and a better customer experience.

Real-World Applications: Square and Branch

Yen pointed to real-world use with partners like Square and Branch. When a business seeks to onboard with those platforms, Persona orchestrates KYB and KYC checks that merge AI efficiency with human review.

At the document level, artificial intelligence runs checks and produces a summary of the documents for a manual reviewer, who can then make a faster, better-informed decision. Machine-learning models screen for fake IDs or misrepresented identities, while the system organizes results for quick human confirmation. The outcome, Yen said, is “faster review, fewer false positives, reduced manual load … and an overall process that’s both safer and more efficient.”

Automation and human judgment work in sequence. Up front, automated detection filters out obvious fraud; afterward, reviewers validate flagged results and confirm legitimate submissions. “If it can be provided in a very summarized and efficient way … I can move very quickly and have those final checks,” Yen said. That dual structure preserves accuracy while speeding onboarding.
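A compressed sketch of that sequence (the field names, rules and thresholds are hypothetical, not Persona’s API) might look like this:

```python
from typing import Optional

KNOWN_FRAUD_IDS: set[str] = set()  # fed back from confirmed-fraud cases

def automated_prefilter(submission: dict) -> Optional[str]:
    """Stage 1: cheap automated checks reject obvious fraud up front.
    Field names and rules are illustrative, not Persona's."""
    if submission.get("doc_tamper_score", 0.0) > 0.95:
        return "declined: document tampering detected"
    if submission.get("id_number") in KNOWN_FRAUD_IDS:
        return "declined: previously confirmed fraudulent identity"
    return None  # not obviously fraudulent; continue to full analysis

def summarize_for_reviewer(submission: dict, flags: list[str]) -> str:
    """Stage 2: condense model output so a person can decide quickly."""
    lines = [f"Applicant: {submission.get('business_name', 'unknown')}"]
    lines += [f"- flag: {f}" for f in flags]
    return "\n".join(lines)

def review_flow(submission: dict, flags: list[str]) -> str:
    verdict = automated_prefilter(submission)
    if verdict:
        return verdict
    if flags:  # models flag risk, but a person makes the final KYC/KYB call
        return "queued for review:\n" + summarize_for_reviewer(submission, flags)
    return "approved"  # clean submissions pass without added friction
```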

Continuous Trust, Not One-Time Checks

AI also strengthens long-term monitoring — the “continuous reverification” Yen described as a key goal. Fraud rarely happens just once: “If someone is successfully able to do so one time, they’re going to try and do so multiple times. That means they are conducting fraud at scale.” Persona’s systems link patterns across accounts to identify coordinated rings rather than isolated incidents.
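The article does not detail how Persona links those patterns, but one common way to surface coordinated rings rather than isolated incidents is to cluster accounts that share identifying attributes. A sketch, with assumed data shapes and attribute names, using a union-find structure:

```python
from collections import defaultdict

def find_rings(accounts: dict[str, dict]) -> list[set[str]]:
    """Cluster accounts that share a device fingerprint, phone number or
    bank account. Keys and attribute names are assumed for illustration."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    # Connect each account to every shared attribute value it uses.
    for acct_id, attrs in accounts.items():
        for key in ("device_id", "phone", "bank_account"):
            if attrs.get(key):
                union(acct_id, f"{key}:{attrs[key]}")

    clusters: dict[str, set[str]] = defaultdict(set)
    for acct_id in accounts:
        clusters[find(acct_id)].add(acct_id)
    # Clusters with more than one account are candidate coordinated rings.
    return [c for c in clusters.values() if len(c) > 1]
```

Two accounts that never touch the same transaction but reuse one device fingerprint would land in the same cluster, surfacing the repeat-at-scale behavior Yen described.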

At the same time, AI enables adaptive friction — making it easy for good actors and progressively harder for bad ones. “You could assume everyone has bad intentions … but that’s going to come at the cost of conversion,” Yen said. By dynamically segmenting users and applying context-based step-ups only when risk signals arise, trust infrastructure can “make the experience better for good actors, as well as make it more difficult for bad actors.” That, he emphasized, is the essence of trust infrastructure in an AI-driven world — layered, dynamic and human at its core.
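As a closing illustration of adaptive friction (the check names and thresholds below are hypothetical; the point is the dynamic step-up, not the values), risk-based routing can be as simple as:

```python
def required_checks(risk: float, returning_good_actor: bool) -> list[str]:
    """Apply friction proportional to risk. Check names and thresholds
    are hypothetical assumptions, not Persona's configuration."""
    checks = ["basic_kyc"]  # everyone clears a lightweight baseline
    if returning_good_actor and risk < 0.2:
        return checks       # known-good users keep a near-frictionless path
    if risk >= 0.4:
        checks.append("document_verification")
    if risk >= 0.7:
        checks.append("live_selfie_match")  # hardest step, high risk only
    return checks
```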

Source: https://www.pymnts.com/