Criminals, good guys and foreign spies: Hackers everywhere are using AI now

This summer, Russia’s hackers put a new twist on the barrage of phishing emails sent to Ukrainians.

The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.

That campaign, detailed in July in technical reports from the Ukrainian government and several cybersecurity companies, is the first known instance of Russian intelligence being caught building malicious code with large language models (LLMs), the technology behind the AI chatbots that have become ubiquitous in corporate culture.

Those Russian spies are not alone. In recent months, hackers of seemingly every stripe — cybercriminals, spies, researchers and corporate defenders alike — have started incorporating AI tools into their work.

LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions, translating plain language into computer code, and identifying and summarizing documents.

The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster. Cybersecurity firms and researchers are using AI now, too — feeding into an escalating cat-and-mouse game between offensive hackers who find and exploit software flaws and the defenders who try to fix them first.

“It’s the beginning of the beginning. Maybe moving towards the middle of the beginning,” said Heather Adkins, Google’s vice president of security engineering.

In 2024, Adkins’ team started a project to use Google’s LLM, Gemini, to hunt for important software vulnerabilities, or bugs, before criminal hackers could find them. Earlier this month, Adkins announced that her team had so far discovered at least 20 important overlooked bugs in commonly used software and alerted companies so they could fix them. That process is ongoing.

None of the vulnerabilities have been shocking, nor were they something only a machine could have discovered, she said. But the process is simply faster with AI. “I haven’t seen anybody find something novel,” she said. “It’s just kind of doing what we already know how to do. But that will advance.”

Adam Meyers, a senior vice president at the cybersecurity company CrowdStrike, said that not only is his company using AI to help people who think they’ve been hacked, but he also sees increasing evidence of its use by the Chinese, Russian, Iranian and criminal hackers that his company tracks.

“The more advanced adversaries are using it to their advantage,” he told NBC News. “We’re seeing more and more of it every single day.”

The shift is only starting to catch up with the hype that has permeated the cybersecurity and AI industries for years, especially since ChatGPT was introduced to the public in 2022. AI tools haven’t always proved effective, and some cybersecurity researchers have complained about would-be hackers falling for fake vulnerability findings generated with AI.

Scammers and social engineers — the people in hacking operations who pretend to be someone else, or who write convincing phishing emails — have been using LLMs to seem more convincing since at least 2024.

But using AI to directly hack targets is only just starting to take off, said Will Pearce, the CEO of DreadNode, one of a handful of new security companies that specialize in hacking using LLMs.

The reason, he said, is simple: The technology has finally started to catch up to expectations.

Source: https://www.nbcnews.com/