Billtrust Says Responsible AI Starts With Data Governance

In the accelerating race to embed artificial intelligence (AI) across enterprise workflows, most organizations face what seems like an impossible paradox: innovate fast enough to stay competitive, yet cautiously enough to avoid regulatory, ethical and security landmines.

At Billtrust, Chief Information Security Officer (CISO) Ankur Ahuja believes that paradox may be somewhat of an illusion.

“AI doesn’t get a special treatment,” Ahuja said. “It’s the same strong audited controls that protect all our financial data.”

“If you follow strong data security policy, it automatically replicates to strong AI security policy,” Ahuja added, noting that Billtrust has built a foundation where AI adoption feels like an extension of existing discipline, not a departure from it.

AI, it turns out, doesn’t have to be an exception to security rules. It can be an extension of them.

“Data is data,” Ahuja said.


This principle helps cut through the noise surrounding AI-specific risk. Whether the technology involves retrieval-augmented generation, vector databases or synthetic data, the same rules can apply: secure data, control access and ensure compliance.

Redefining the CISO’s Mandate for the AI Era

In Ahuja’s view, security is a product feature. That mindset has become a competitive advantage for Billtrust, whose clients rely on it to manage sensitive financial transactions. As generative AI begins to automate portions of the accounts receivable (AR) process, the company’s greatest asset isn’t just its algorithms; it’s the trust embedded in how those algorithms handle data.

“Our approach to AI governance is basically enabling it, but with guardrails,” Ahuja explained. “We want our teams to innovate with AI, play around with it, but in a way that protects customer trust, data integrity and intellectual property.”

“Whether it’s specific to retrieval-augmented generation or vector stores or anonymization, everything has to be based on how you deal with your current data,” he added.

By treating all data equally, whether it’s financial, operational or AI-generated, teams are better able to sidestep the complexity and confusion that often plague enterprise AI deployments.

“Any data that could touch payment systems follows the same PCI rules: segmented networks, encrypted data, continuous scanning,” Ahuja said. “Similarly, our [SOC 1 and SOC 2 compliance] adheres to the same broader security program governing logical access, change management and data handling.”

For privacy laws like GDPR and CCPA, Billtrust ensures that customer data used in AI systems is governed by the same consent and deletion rights as any other data category.

“We ensure customer data is only used for the purpose it was collected, with clear consent and deletion rights,” Ahuja said.

The Playbook for Vendor Accountability

Still, internal governance is only half the equation, which is why Ahuja extends his standards to every external partner involved in Billtrust’s ecosystem.

“Do you use customer AR data in training foundational public models?” he suggested as the first question to ask. “Expect a clear ‘no’ answer. If they say yes, then there’s something you need to dig into.”

For Ahuja, transparency in AI processing is nonnegotiable. Vendors must prove that their systems protect inputs and outputs from leakage.

“These are the kind of questions every CFO should ask,” he said. “Make sure that their data is secured wherever it is, whichever vendor is taking care of it. … It’s a very simple principle. Trust but verify.”

Responsible AI doesn’t necessarily require reinventing governance. It requires extending what already works: a corporate principle of treating data stewardship not as a constraint, but as the foundation of innovation.

“We maintain backups and disaster recovery plans not only for our infrastructure, but also for the AI models and training datasets,” Ahuja said, emphasizing that consistency extends to resilience.

AI, in other words, is part of the enterprise continuity plan and not a separate silo.

Confidence Through Consistency

Even with strong governance, AI can introduce a unique set of operational risks, chief among them hallucinations and model drift. Ahuja’s team has implemented a multitiered validation process to minimize both.

When the stakes are financial, such as in collections or credit-risk decisions, Billtrust maintains a strict “human-in-the-loop” policy.

“We don’t let machines make decisions when it comes to financial decisions,” Ahuja said. “Humans are involved.”

Ultimately, by demystifying AI risk, departments can work to transform compliance from an obligation into a shared value while empowering teams to innovate safely rather than restricting their creativity.

“It’s called awareness and enablement,” Ahuja said. “Make sure the entire company is aware of how we process AI data.”

Responsible AI, at the end of the day, may not require reinventing the wheel as much as respecting the systems, and the trust, that already work.

Source: https://www.pymnts.com/