OpenAI Fixes Flaw That Left Gmail Data Vulnerable

OpenAI

OpenAI has reportedly fixed a ChatGPT security weakness that left user email data vulnerable.

That’s according to a report Thursday (Sept. 18) by Bloomberg News, citing researchers at cyber firm Radware.

The issue was discovered in DeepReseach, a ChatGPT agent introduced in February to help users analyze large swaths of information, the report said. The flaw could have allowed hackers to steal sensitive data from corporate or personal Gmail accounts.

Radware found the vulnerability, and researchers said there was no sign that attackers had exploited it. OpenAI informed Radware it had patched the vulnerability on Sept. 3.

An OpenAI spokesperson told Bloomberg the safety of the company’s models was important, and it is continually improving standards to protect against such exploits.

“Researchers often test these systems in adversarial ways, and we welcome their research as it helps us improve,” the spokesperson said.

The report notes that while hackers have used artificial intelligence (AI) tools to carry out attacks, Radware’s findings are a relatively rare example of how AI agents themselves can be used to steal customer information.

Pascal Geenens, Radware’s director of threat research, said that the intended targets here wouldn’t have needed to click on anything in order for hackers to steal their data.

“If a corporate account was compromised, the company wouldn’t even know information was leaving,” he said.

Google said earlier this year that it was developing autonomous systems that can identify and respond to threats in real time — in many cases with no human intervention.

“Our AI agent Big Sleep helped us detect and foil an imminent exploit,” Sundar Pichai, the tech giant’s CEO, wrote in a post on the social platform X. “We believe this is a first for an AI agent — definitely not the last — giving cybersecurity defenders new tools to stop threats before they’re widespread.”

As PYMNTS wrote soon after, this new reality may pose new questions for business leaders, especially chief information security officers (CISOs) and chief financial officers (CFOs).

“Are enterprise organizations ready for defense at machine speed? What’s the cost of not adopting these tools? Who’s accountable when AI systems take action?”

For CISOs, this means the emergence of a new category of tools, ones that are AI-first threat prevention platforms that don’t wait for alerts but look for weak points in code, configurations or behavior, and automatically take action.

“For CFOs, it signals a change in cybersecurity economics,” PYMNTS added. “Prevention at this scale is potentially cheaper and more scalable than the human-powered models of the past. But that’s only if the AI is accurate and accountable.”

Source: https://www.pymnts.com/