Compliance programs are only as strong as their weakest pillar. And for CFOs and finance departments, artificial intelligence (AI) is emerging as one of the biggest opportunities, and one of the most dangerous risks, of the 21st century.
Enterprise AI targeted at corporate back-office workflows doesn’t just learn from data; it redefines how decisions are made. That means accountability structures built for human oversight are being stress-tested in ways compliance leaders are still struggling to measure. The challenge isn’t merely technical. It’s structural. The same algorithms that can enhance efficiency or accuracy can also introduce opaque dependencies, unpredictable biases, and noncompliant cross-jurisdictional data flows.
Adopting AI doesn’t just change how finance operates. It can ultimately change what compliance means.
The “so what” for CFOs is that governance over data and algorithms is becoming as important as governance over dollars and disclosures. The CFO who treats AI as another IT tool could be missing the point. The CFO who treats AI as part of the control environment may be getting ahead of it.
New Frontier of Compliance Risk
Historically, the compliance function in finance organizations has operated within well-defined guardrails. Sarbanes-Oxley (SOX) controls govern financial reporting. The Securities and Exchange Commission (SEC) sets standards for disclosures. Cybersecurity frameworks from NIST and ISO govern data protection.
Each of these regimes shares a common premise: The entities being regulated, whether they are people, systems or processes, are known and their behaviors largely traceable.
AI breaks that assumption. The learning models embedded within forecasting tools or risk analytics engines evolve continuously based on new data inputs. Their internal reasoning, particularly in complex deep-learning models, may be statistically valid but logically inscrutable. For a CFO signing off on quarterly statements or audit attestations, this presents a fundamental problem: How do you ensure accountability when the “actor” is an algorithm?
“You’re messing with … money here,” Trustly Chief Legal and Compliance Officer Kathryn McCall told PYMNTS in an interview posted this summer. “This is a lot different from using an AI agent to plan your vacation in Paris. … You’ve got to treat these AI agents as nonhuman actors with unique identities in your system. You need audit logs, human-readable reasoning and forensic replay.”
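McCall’s checklist, unique identities, audit logs, human-readable reasoning and forensic replay, maps to a concrete logging discipline. The sketch below is illustrative only; the agent names, fields and model versions are hypothetical assumptions rather than anything drawn from Trustly, and it simply shows the kind of per-decision record that makes a nonhuman actor’s choices auditable and replayable.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class AgentDecisionRecord:
    """One audit-log entry for a decision made by a nonhuman (AI) actor."""

    agent_id: str       # unique identity of the AI agent in the system
    model_version: str  # exact model/config version, needed for forensic replay
    inputs: dict        # the data the agent saw when it made the decision
    decision: str       # what the agent did or recommended
    rationale: str      # human-readable reasoning captured at decision time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Hash the full record so a later replay can be checked against it."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


# Hypothetical example: log a payment-routing recommendation made by an agent.
record = AgentDecisionRecord(
    agent_id="agent:treasury-router-01",
    model_version="routing-model-2025.10.1",
    inputs={"invoice_id": "INV-1042", "amount": 125_000, "currency": "USD"},
    decision="route_via_ach",
    rationale="Lowest-cost rail that still meets the supplier's due date.",
)
print(record.fingerprint())
```

The design choice worth noting is that the rationale is captured at the moment of decision, not reconstructed later, which is what turns an audit log into something a forensic review can actually replay.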
Traditional compliance frameworks are designed around the principle of control: the ability to define, test and document how decisions are made and validated. But in the world of AI, “control” morphs into “explainability,” the ability to articulate why a model made a given prediction or recommendation.
Finance functions have always depended on trustworthy data, but AI exponentially magnifies the scale and complexity of data dependencies. In practical terms, that can mean documenting not only what the model does but also what assumptions underpin its logic, what data it consumes, and how those inputs are validated over time.
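In practice, validating inputs over time often starts with something as unglamorous as a documented schema-and-range gate in front of the model. The following sketch is a hypothetical illustration, not any vendor’s tool; the field names, ranges and allowed currencies are assumptions chosen to show the kind of testable check a controller could point to in an audit.

```python
# Illustrative only: a minimal input-validation gate for a forecasting model.
from datetime import date

REQUIRED_FIELDS = {"period_end": date, "net_revenue": float, "currency": str}
ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}


def validate_forecast_input(row: dict) -> list[str]:
    """Return a list of validation failures; an empty list means the row may feed the model."""
    errors = []
    for name, expected_type in REQUIRED_FIELDS.items():
        if name not in row:
            errors.append(f"missing field: {name}")
        elif not isinstance(row[name], expected_type):
            errors.append(f"{name} should be {expected_type.__name__}")
    if isinstance(row.get("net_revenue"), float) and row["net_revenue"] < 0:
        errors.append("net_revenue is negative; flag for review before modeling")
    if row.get("currency") not in ALLOWED_CURRENCIES:
        errors.append(f"unsupported currency: {row.get('currency')!r}")
    return errors


# A rejected row is logged rather than silently dropped, so auditors can
# trace exactly what the model never saw.
print(validate_forecast_input(
    {"period_end": date(2025, 9, 30), "net_revenue": -1.0, "currency": "JPY"}
))
```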
Redefining the Role of the CFO
The marketplace is not standing still as AI sweeps over the enterprise back office.
NContracts on Monday (Oct. 20) introduced a pair of AI-powered compliance and risk management solutions for financial institutions. Earlier in October, Anthropic and Deloitte announced a partnership to build AI solutions that include compliance features to enable their deployment in regulated industries like financial services, healthcare and life sciences, and public services.
Companies are asking not “Should we try this?” but “How will this improve cash flow, forecasting accuracy, or decision speed?”, Emanuel Pleitez, head of finance at Finix, told PYMNTS in an interview posted Oct. 8.
“If you just start using AI today without needing to make the big five, 10% of your budget investment into it, you can actually extract and get five to up to 20% more productivity gains,” added Pleitez.
The latest PYMNTS Intelligence report, “From Experiment to Imperative: U.S. Product Leaders Bet on Gen AI,” captures this pivot well. Eighty-seven percent of product leaders now expect AI to improve fraud detection, 85% forecast better regulatory compliance, and 83% anticipate stronger data security.
As PYMNTS wrote earlier this year, financial industry executives believe that companies have little choice but to turn to AI to make their way through today’s increasingly complex regulatory landscape and faster product development cycles.
“In 2025, there is pretty much no compliance without AI, because compliance became exponentially harder,” said Alexander Statnikov, co-founder and CEO of Crosswise Risk Management. “Think about all the change management that happens with regulations. Now, states will be stepping in. How do you stay on top of it?”
Source: https://www.pymnts.com/