Five witnesses told a House Financial Services subcommittee that AI is already changing the face of banking, improving back-office operations and customer-facing interactions, but they cautioned that use of the advanced technology must be governed carefully in credit scoring and amid third-party vendor risks.
At the Thursday afternoon (Sept. 18) hearing before the subcommittee on Digital Assets, Financial Technology, and Artificial Intelligence titled “Unlocking the Next Generation of AI in the U.S. Financial System for Consumers, Businesses, and Competitiveness,” panelists took note of the growth in generative artificial intelligence (AI) and agentic AI within banking. Gains in efficiency and inclusion must be balanced against risks related to bias, security and compliance, they warned.
In his testimony, Daniel S. Gorfine, CEO of Gattaca Horizons LLC, noted that AI in financial services is “not new,” as the sector has utilized advanced analytics since the 1980s, focusing on investment data analytics, followed by fraud detection in the 1990s. Dr. Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, highlighted the scope of this transformation, stating that the financial sector’s spending on AI is expected to grow to $97 billion by 2027, up from $35 billion in 2023.
However, the recent rapid expansion of capabilities, including the development of large language models (LLMs) and agentic AI, necessitates careful oversight, Gorfine said.
Dr. Christian Lau, co-founder and president of Dynamo AI, emphasized that while financial institutions have identified hundreds of AI use cases ready for deployment, many struggle with governance. He noted that AI use cases often fail to make it into production because “financial institutions struggle to answer open questions about managing AI risk in heavily regulated environments.” To move forward, policymakers and regulators must clearly signal “both the innovation they want to encourage and the risks they consider most important to address,” Lau said.
Chatbots
One of the most immediate and visible applications of AI in banking is through automated customer service and internal support systems. Witnesses generally agreed that AI-powered chatbots enhance efficiency, but they also underscored the associated compliance and security risks.
Dr. David Cox, vice president for AI models at IBM Research, detailed how financial firms are successfully deploying these tools to improve customer relations, saying they are using “IBM-enabled chatbots and natural language systems to deliver faster, more personalized customer service.” This allows clients to interact using non-technical language, instantly resolving routine queries “while freeing human representatives for more complex cases.”
Lau added that internal operations also improve, as the bots help employees understand company policies and business line standards to better execute processes and engage with customers and colleagues. Matthew Reisman of the Centre for Information Policy Leadership further observed that generative AI has boosted productivity across functions, from software development to customer service.
However, the rapid shift to automated customer interaction creates regulatory ambiguities, panelists said, and they warned that scammers have been leveraging models to impersonate trusted individuals.
And while the financial services industry utilizes AI extensively for backend functions such as data analytics, fraud detection and internal risk management, these uses are complicated by inherent risks stemming from the technology’s opacity, its reliance on third-party vendors, and the emergence of autonomous systems, the experts said.
A key challenge for sophisticated AI is explainability. Lau emphasized that generative AI models differ from conventional statistical models because “it is impossible to determine exactly how or why a generative AI model responded the way it did,” contending that the reality of “explainability and lack of traceability in AI models may warrant a revision of existing risk management guidance, evaluation methods, and subsequent controls.” And Cox maintained that there are security concerns surrounding third-party vendors, stating that organizations worry about safeguarding proprietary and customer data, particularly when using LLMs delivered through cloud-based services.
Bias and Credit Access
A major focus of the hearing was the dual impact of AI on consumer credit: the promise of expanded financial inclusion versus the danger of algorithmic bias reinforcing historical inequities.
The Brookings Institution’s Lee argued that the financial sector’s reliance on existing datasets means that AI risks blunting credit access. The problem is exacerbated by the use of non-traditional data points, or proxies, which can stand in for other characteristics. She highlighted that nearly 50 million U.S. adults lack enough credit history to be scored by common models, and noted that significant portions of minority populations would be unscored or considered subprime.
Despite these clear dangers, the witnesses also presented artificial intelligence as a powerful tool for remediation and expansion of services. Lau said AI deployment, with proper risk mitigation, can lead to “broader access to credit and banking services in underserved communities.” Gorfine concurred, saying AI can offer decision-making “in the context of determining creditworthiness when a traditional credit score may preclude access.”
During the hearing, Rep. Stephen Lynch (D-Mass.) took note of the velocity of change in the development of AI, saying such rapid advancements have created challenges for lawmakers in terms of regulating AI while protecting markets and consumers. The sandbox model, he said, seen in Singapore and other countries, has “a principles-based approach … that sets an ethical foundation for all AI projects” along with regular reviews and audits. He asked, “Is that the formula we should use?”
Lee responded: “I am a fan of sandboxes … in the United States we use sandboxes to cultivate a FinTech marketplace.” But the pace of change, she added, demands that there are principles that govern “accountability, continuous monitoring, transparency, and allows us to ensure that consumers are ‘baked into the process.’”
Source: https://www.pymnts.com/