Before you read this story, let’s take a little quiz to test your knowledge of financial crimes. One rule before you do: No AI allowed.
A customer opens an account and provides information that is inconsistent with their stated occupation. What should the financial institution do?
(a) Ignore the discrepancy as it’s common for people to have multiple sources of income.
(b) Immediately file a suspicious activity report (SAR).
(c) Conduct further due diligence to verify the customer’s information.
(d) Close the account immediately.
If you chose “c,” go to the head of the class. Or you could apply to join the engineering team at Castellum.AI. Because when the Certified Anti‑Money Laundering Specialist (CAMS) practice exam was submitted this spring, the test‑taker was not a junior analyst nervously awaiting a score. It was the firm’s flagship artificial intelligence (AI) agent, and it passed on the first try, a feat normally reserved for human compliance officers.
“Financial criminals have started using AI at scale,” Castellum Co‑Founder and Chief Executive Peter Piatetsky told PYMNTS’ Karen Webster in a recent interview. “And regulators have become very aware of it, so there’s almost an arms‑race approach now.”
The exam victory arrives as the New York‑based startup closes an oversubscribed $8.5 million Series A round led by Curql, a venture fund backed by more than 130 credit unions including Navy Federal. Additional capital came from BTech Consortium — whose limited partners include Customers Bank — and Framework Venture Partners, which counts Royal Bank of Canada among its backers. Castellum plans to use the cash to enlarge its engineering staff, extend integrations into core banking systems and speed delivery of its product to midtier institutions that have historically struggled to keep pace with sophisticated fraud rings.

Building Around First‑Party Data
Castellum bills itself as the only financial crimes compliance platform with in‑house risk data, AML/KYC screening and AI agents. Piatetsky, a former U.S. Treasury sanctions officer, founded Castellum in 2019 on a contrarian premise: that the hardest part of compliance software is not workflow orchestration or flashy user interfaces, but reliable data.
“We started by building our own data pipeline,” he said. “Anything bad that happens in the world related to financial crime and reputational risk — we’re going to know about it.”
Instead of purchasing sanctions lists, politically exposed person (PEP) files or adverse‑media feeds from third parties, Castellum collects them directly from hundreds of government and press sources, many of them unstructured. PDFs in Japanese, Excel sheets from Balkan registries and machine‑translated court proceedings all get vacuumed up, standardized and enriched every five minutes, Piatetsky said. On top of that living database sits a screening engine, case‑management tools and, most recently, a family of task‑specific AI agents. Because the company owns the underlying content, the agent that cleared the CAMS practice exam can fetch supplemental facts without pinging outside vendors.
“The agent can go right back into the Castellum database and try to fill in the gaps,” he said. That closed loop, he argues, lowers cost, raises speed and keeps models from hallucinating.
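To make the pipeline idea concrete, here is a minimal sketch of what normalizing heterogeneous feeds into one schema can look like. All names, fields and sources below are hypothetical illustrations, not Castellum’s actual schema or code:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskRecord:
    """One standardized entry in a hypothetical risk database."""
    name: str
    source: str
    record_type: str                 # e.g. "sanction", "pep", "adverse_media"
    aliases: list = field(default_factory=list)
    ingested_at: str = ""

def normalize(raw: dict, source: str) -> RiskRecord:
    """Map a raw record from a differently shaped feed into the common schema."""
    # Feeds label the primary name inconsistently; try the common variants.
    name = (raw.get("name") or raw.get("entity") or raw.get("full_name") or "").strip()
    aliases = [a.strip() for a in raw.get("aka", []) if a.strip()]
    return RiskRecord(
        name=name,
        source=source,
        record_type=raw.get("type", "unknown"),
        aliases=aliases,
        ingested_at=datetime.now(timezone.utc).isoformat(),
    )

# Two records from differently structured feeds collapse into one schema.
ofac = normalize({"name": "ACME TRADING LLC", "type": "sanction",
                  "aka": ["ACME LLC"]}, "OFAC")
press = normalize({"entity": "Acme Trading", "type": "adverse_media"}, "news_feed")
```

The point of a closed loop like this is that a downstream agent queries one uniform store rather than re‑parsing each government or press source on demand.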
Skeptics of agentic AI in compliance point to the risk that models amplify dirty inputs, creating faulty matches or, worse, false assurances. Piatetsky says the criticism is valid for tools that depend on recycled public datasets. His answer has been to become a data editor for the world’s regulators.
“We actually correct governments on a regular basis,” he said, flipping through examples of 1,900‑year‑old sanctioned individuals and Russian entities whose legal name was published merely as the Cyrillic equivalent of “LLC.” Castellum hosts an online “Department of Corrections” that publishes errors it has reported to the Treasury Department’s Office of Foreign Assets Control, the Commerce Department, Thailand’s Anti-Money Laundering Office and even the United Nations.
Regulators, once suspicious of black‑box AI, are warming to that approach. “It went from ‘Are you sure you need this?’ to ‘Oh yeah — you need this. Prove to us that it works,’” Piatetsky said. Castellum accommodates the new posture by giving every client its own model, trained on that institution’s policies, then tested on known alerts (open‑book) and unknown alerts (closed‑book). The results, which banks forward to examiners, often outperform offshore adjudication teams that take days to disposition cases, he said.
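The open‑book/closed‑book distinction is just a train/held‑out evaluation. A toy sketch, with an invented decision rule and alert fields standing in for a real model and real cases:

```python
from typing import Callable

def evaluate(model: Callable[[dict], bool], alerts: list[dict]) -> float:
    """Fraction of alerts where the model's disposition matches the analyst's."""
    correct = sum(model(a) == a["analyst_escalated"] for a in alerts)
    return correct / len(alerts)

# "Open-book": alerts with dispositions the model was tuned against.
known = [
    {"amount": 50_000, "analyst_escalated": True},
    {"amount": 200, "analyst_escalated": False},
]
# "Closed-book": held-out alerts the model has never seen.
unknown = [
    {"amount": 75_000, "analyst_escalated": True},
    {"amount": 90, "analyst_escalated": False},
]

# Placeholder stand-in for a per-client model.
model = lambda alert: alert["amount"] > 10_000

open_book = evaluate(model, known)      # sanity check on known cases
closed_book = evaluate(model, unknown)  # generalization to unseen cases
```

Reporting both numbers to examiners shows the model is not merely memorizing the alerts it was tuned on.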
New Due‑Diligence Playbook
The rise of real‑time payments and instant‑settlement rails has forced compliance officers to value speed alongside thoroughness. “You need to be evaluating speed. … You have to come in on day one and start fighting fires,” Piatetsky said, contrasting his approach with systems that “sit in a bank and watch activity” for weeks before generating meaningful signals.
That urgency is no longer limited to the top tier. Fraud rings increasingly probe the defenses of community banks and credit unions, knowing that smaller shops may lack resources to monitor 24/7. Because Castellum’s AI can screen sanctions lists and adverse media in milliseconds and draft SARs when it sees a probable hit, Piatetsky pitches it as “bringing the work back in‑house” for institutions that once outsourced level‑two reviews to large business‑process firms.
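At its core, list screening is fuzzy name matching against a watchlist. A minimal sketch using Python’s standard‑library `difflib` as a stand‑in scorer (the list entries and threshold are invented; production screeners use far more sophisticated matching):

```python
from difflib import SequenceMatcher

# Hypothetical watchlist entries for illustration only.
SANCTIONS = ["ACME TRADING LLC", "GLOBEX HOLDINGS", "IVAN PETROV"]

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist names whose similarity to `name` meets the threshold."""
    hits = []
    for listed in SANCTIONS:
        score = SequenceMatcher(None, name.upper(), listed).ratio()
        if score >= threshold:
            hits.append((listed, round(score, 2)))
    return hits

probable = screen("Acme Trading LLC")   # near-exact match survives casing
clean = screen("John Smith")            # no entry clears the threshold
```

The threshold is the operational dial: raise it and false positives fall but near‑miss aliases slip through; lower it and the reverse.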
The pressure is most acute in digital‑asset markets, where transactions don’t clear online; they clear on‑chain. Castellum serves a top‑five crypto exchange and works with stablecoin issuers that need sanctions and fraud checks to run as fast as their ledgers.
“Stablecoins are a fantastic real use case for financial services. It’s also very attractive to criminals,” he said. “Your anti‑fraud and anti‑money laundering solutions absolutely cannot be batched right now.”
The company’s pitch to crypto firms mirrors its message to banks: ingest everything, enrich it quickly, and let dedicated agents decide whether to file a suspicious‑activity report, block funds or request more documentation. Don’t wait for an overnight job.
From Regulator to Vendor
Piatetsky’s Treasury pedigree gives Castellum credibility in Washington and abroad. The startup already counts several government agencies among its paying customers, including a Canadian crown corporation and an Emirati ministry. That public‑sector exposure, combined with private‑sector feedback loops, allows the firm to spot flaws in official lists — and to mend them before criminals exploit the gaps or innocent customers get flagged.
“No one’s ever accused any government of being really data‑ and tech‑forward,” he quipped. “Hundreds of millions are spent on finding bad guys, and then almost no thought is given to how the data is presented to the public.” By transforming those releases in near real time, Castellum says it trims false positives for clients by as much as 90%.
Asked by Webster what he intends to do with the fresh capital, Piatetsky was blunt. “Our product works and our clients love it. … It’s time to scale,” he said. “That means more AI agents and more integrations.” The company will hire engineers to build connectors into core processors used by midsize banks and credit union service organizations, shortening sales cycles that now hinge on IT backlogs.
Looking out a year, Castellum aims to have its AI agent sit for the CAMS exam in a proctored setting and earn an official credential. It’s a public demonstration, Piatetsky hopes, that AI has moved from novelty to necessity in compliance circles. Regulators, for their part, are watching. An agent that can ace the same test demanded of human investigators may persuade them that the banks they supervise are finally ready to fight criminal AI with compliant AI of their own.
Source: https://www.pymnts.com/