AI’s Real ROI Test: Earning Trust

Every major shift in technology reshapes the foundations of trust. The rise of autonomous artificial intelligence (AI) systems is doing so at unprecedented scale, transferring decision-making from people to algorithms. Trust has become a form of infrastructure in the AI economy, determining how confidently markets and institutions allow these systems to operate.

Trust Gap Slows Deployment

A global study by KPMG and Melbourne Business School found that 66% of people use AI weekly and 83% believe it provides benefits, but only 46% say they trust it. The Stanford HAI AI Index 2025 found similar sentiment: while most people agree AI will transform how society functions, fewer than half are confident that transformation will be positive.

In finance and healthcare, where algorithmic decisions affect credit, capital and compliance, low trust has become a measurable business constraint. Regulators are responding accordingly. The U.S. Government Accountability Office reported in 2025 that regulators are prioritizing transparency, documentation and oversight in AI deployments. In Europe, the EU Artificial Intelligence Act requires providers of high-risk AI systems to prepare detailed technical documentation, including model design, risk management and data provenance, before such systems can be placed on the market.

At the same time, enterprise investment continues to expand. PYMNTS reports that global AI spending could surpass $2.8 trillion through 2029, driven by automation across finance, logistics and data infrastructure. Yet as the World Economic Forum warned, “AI can only scale at the speed of public confidence.”

A related PYMNTS feature on payments innovation notes that leading financial executives already treat trust as the new currency in real-time payments. Their perspective reinforces that confidence is not a soft metric but a competitive advantage that determines which systems consumers and businesses choose to rely on.

Governance Defines the Value of Autonomy

AI systems are advancing into what the World Economic Forum calls the “agent economy,” where digital agents interact, negotiate and make decisions on behalf of people and organizations. That autonomy drives efficiency but also expands exposure to bias, misuse and cyber risk.

A CIO analysis calls governance the “blueprint for trust,” arguing that oversight must be built into AI design through documentation, auditability and human review. The same tension is visible across the private sector. A PYMNTS report on Discover Financial Services highlights how even early adopters of AI are urging caution, noting that trust and governance must develop as quickly as innovation itself.

According to KPMG’s 2025 board-readiness survey, more than half of Fortune 500 companies now maintain formal AI governance committees, an increase from prior years, as boards seek to align AI performance with both regulatory and ethical expectations.

Trust Is a Competitive Advantage

The World Economic Forum describes trust as the “new currency” of the AI economy. In practice, it determines whether innovation translates into adoption. A Wall Street Journal report found that consumers are far more likely to engage with AI-powered platforms when data use is transparent and opt-out controls are clearly stated.

In her analysis, PYMNTS CEO Karen Webster extends that logic to the data economy, arguing that trust is now the only true currency of information exchange. The more transparent and auditable a company’s data practices are, the more resilient its business model becomes.

The KPMG global trust study found that 70% of respondents worldwide support stronger AI regulation to ensure accountability. For investors and boards, that sentiment has direct implications. Explainability, auditability and oversight are now part of enterprise valuation. Systems that cannot be verified are treated as compliance risks rather than innovation assets.

Source: https://www.pymnts.com/