Anthropic Warns of ‘Sophisticated’ Cybercrime Via Claude LLM

Anthropic is warning of the growing use of artificial intelligence (AI) in cybercrime.

“Agentic AI has been weaponized,” the company wrote Wednesday (Aug. 27) in an announcement accompanying its Threat Intelligence report.

“AI models are now being used to perform sophisticated cyberattacks, not just advise on how to carry them out.”

The report details examples of Anthropic’s Claude AI model being used for illicit ends, including a recent case in which a “sophisticated cybercriminal” employed Claude Code to commit “large-scale” theft and extortion of personal data.

The scammer targeted at least 17 organizations across healthcare, emergency services, government and religious institutions, and threatened to expose the stolen data in an effort to extort victims into paying ransoms of more than $500,000.

“The actor used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks,” the company wrote.

The AI was allowed to make tactical and strategic decisions, such as deciding which data to exfiltrate and how to create “psychologically targeted” extortion demands. Claude also determined “appropriate” ransom amounts using the stolen data and “generated visually alarming ransom notes.”

Anthropic said it banned the accounts in question upon discovering the operation and has since developed new screening and detection tools. The extortion operation, the company said, marks “an evolution in AI-assisted cybercrime,” with agentic AI being used to carry out attacks that would otherwise require a team of people.

“We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime,” the company said.

Writing about the use of agentic AI in cybersecurity last month, PYMNTS argued that the technology has created a new reality, one that raises pressing questions for chief information security officers (CISOs) and chief financial officers (CFOs).

“Are enterprise organizations ready for defense at machine speed? What’s the cost of not adopting these tools? Who’s accountable when AI systems take action?” the report said.

There was a time, PYMNTS wrote, when “zero-day vulnerabilities” (previously unknown security flaws in software or hardware) were discovered by adversaries first. Now, AI agents can flag high-risk issues before anyone knows they exist.

For CISOs, this means the rise of a new category of tools: AI-first threat prevention platforms that don’t wait for alerts but instead hunt for weak points in code, configurations or behavior and go on the defense automatically.

“For CFOs, it signals a change in cybersecurity economics,” PYMNTS wrote. “Prevention at this scale is potentially cheaper and more scalable than the human-powered models of the past. But that’s only if the AI is accurate and accountable.”

Source: https://www.pymnts.com/