Google Identifies New Forms of AI-Powered Cyberattacks

State-sponsored threat fraudsters have developed malware powered by artificial intelligence (AI) that can not only generate malicious scripts but also “change its code on the fly” to get around detection systems, Google Threat Intelligence Group (GTIG) said in a Wednesday (Nov. 5) blog post.

GTIG said in a report released Wednesday that this is the first time it has seen malware families use large language models during execution.

“While still nascent, this represents a significant step toward more autonomous and adaptive malware,” the report said.

This is one example of the ways threat actors are using AI not only for productivity gains but also for “novel AI-enabled operations,” GTIG said in its blog post.

The criminal groups are also using pretexts like posing as a student or researcher in prompts to bypass AI safety guardrails and extract restricted information, and they are using underground digital markets to access AI tools for phishing, malware and vulnerability research, according to the post.

“At Google, we are committed to developing AI responsibly and take proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the company said in the report. “We also proactively share industry best practices to arm defenders and enable stronger protections across the ecosystem.”

PYMNTS reported Monday (Nov. 3) that AI has become both a tool and a target when it comes to cybersecurity. For example, CSO.com said that agentic AI is emerging as a transformative force in cybersecurity because it can process data continuously and react in real time to detect, contain and neutralize threats at a scale and speed that human teams cannot match.

It was also reported Monday that tech companies are increasing their efforts to combat a security flaw in their AI models. The companies are focused on stopping indirect prompt injection attacks in which a third party hides commands inside a website or email to trick AI models into turning over unauthorized information.

The PYMNTS Intelligence report “COOs Leverage AI to Reduce Data Security Losses” found that chief operating officers are adopting generative AI-driven solutions to improve cybersecurity management at a time when companies face the threat of cyberattacks that are growing more sophisticated.

Source: https://www.pymnts.com/