Assessing the human element: Panelists weigh AI’s role in cybersecurity during HumanX

Cybersecurity professionals are grappling with the impact of AI and how to protect key systems and assets in an environment where the human element remains central to automation’s success or failure.

The conflict between pressure to embrace automation and the need to keep humans in the loop has become more pronounced in the cybersecurity world as AI adoption grows and security issues roll relentlessly onward.

“With everything in security, there’s so much activity coming in, people’s jobs are maxed out, you’ve got skills training gaps,” said John Furrier (pictured, right), theCUBE Research industry analyst and co-founder of SiliconANGLE Media. “All of these issues are happening today, and the AI is getting smarter. The bad guy has to be right once. If the company is wrong once, it’s over.”

Furrier’s remarks set the stage for a panel session he moderated, “Smart Security: Where AI Meets Human Insight,” on Monday at the HumanX AI conference in Las Vegas. He was joined by Kara Sprague (left), chief executive officer at HackerOne Inc.; Dean Sysman (second from left), CEO and co-founder at Axonius Inc.; and Nia Castelly (second from right), co-founder and head of legal for Checks, an AI-powered legal compliance platform from Google LLC.

Building trust for the human element
Furrier led the panel on a discussion of governance and the need for increased compliance in the age of AI. The Checks compliance intelligence platform became a fully integrated Google product two years ago, and Castelly spoke about the importance of trust in building AI infrastructure.

“I think the money is going to be where the trust is,” she said. “It’s going to boil down to trust, and those who take that seriously, put in their own governance frameworks and think about this in a smart way early on are the ones who are going to win.”

The challenge for many in the cybersecurity field is to find ways to leverage AI for maximum effect while keeping humans in the loop. Sprague saw this as an opportunity to free security analysts from the burdensome work of monitoring threat traffic in security operations centers, or SOCs.

“Cybersecurity tends to move so quickly … it’s a great petri dish for where we can look at humans in the loop,” Sprague told the HumanX audience. “My hope is that we can start applying AI and more automation to unburden a lot of those SOC operators and make them much more effective.”

While AI may soon prove effective at easing cybersecurity workloads, many organizations today still face the problem of creating new vulnerabilities through the use of unsafe AI models. Axonius’ Sysman sees this as further evidence that employees must take full ownership of safe security practices.

“You start to see sprawl of model usage from other companies,” he said. “People start to go rogue; they use shadow AI. The security team needs to teach the organization, teach every single employee, how their part of the job in an organization matters from a cybersecurity standpoint.”

The HumanX panel session highlighted the ways in which cybersecurity is continuing to demand a closer examination of how AI will be implemented in the enterprise. There is a lot at stake for many companies in protecting against unwanted security holes as AI usage grows, and the cybersecurity field is providing instructive use cases for continued human involvement.

“Humans in the loop means something to cybersecurity,” Furrier noted. “Security is probably the best example where AI is on the cutting edge of key issues.”

Source: https://siliconangle.com/