The cat-and-mouse game of computer security has been going on since at least the 1970s.
There was John Draper’s infamous hack of telephone systems using a toy whistle from a Cap’n Crunch cereal box in 1971. On the good-guy side, Ray Tomlinson wrote what is regarded as the first antivirus software in 1972, when his Reaper program eradicated the Creeper worm that his colleague Bob Thomas had unleashed in the pair’s demonstration of what a self-replicating program could do.
Fast forward to today. The action is non-stop, and the tools are more sophisticated. Step aside, plastic playthings and early cyberworms. The latest weapon is here: artificial intelligence.
And it’s available to all, in one form or another.
“Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding,” wrote the U.K. government’s National Cyber Security Centre in its January report, The near-term impact of AI on the cyber threat. “This trend will almost certainly continue to 2025 and beyond.”
With the “bad guys” having access to AI, it’s no wonder that cybersecurity vendors are not only busy adding AI-enabled products to their stables, but also stressing how important it is to stay ahead of the “threat actors” in the AI quest. Cybersecurity has always been a case of staying a step ahead of the darker forces.
Not that vendors wouldn’t be deploying AI anyway, but its take-up by the opposition makes its use in cybersecurity products all the more compelling.
“As much as AI is a boon to enterprises, it is also being highly leveraged by adversarial actors for evasive, unique, and damaging zero-day attacks that traditional security solutions cannot detect,” Santa Clara, CA-based security firm Palo Alto Networks said in one of two press releases last May announcing a blitz of AI products and methods; the other release heralded “AI-powered security that can outpace adversaries and more proactively protect networks and infrastructure.”
The language was strikingly similar to that used by Microsoft, which in April announced general availability of Microsoft Copilot for Security and in so doing implored users to “outpace adversaries.”
In an online AI explainer, Abingdon, U.K.-based computer security vendor Sophos noted, “Cybercriminal organizations have already invested in machine learning, automation, and AI to launch large-scale, targeted cyberattacks against organizations. The number of threats and potential for ransomware impacting networks continues to grow.” As Sophos further points out, “Hackers can leverage AI for malicious purposes, including generating convincing phishing emails and even building out malware.”
With the starting gun having sounded in the black hat vs. white hat AI race, consulting firm Deloitte forecasts that the AI cybersecurity market will grow to $102.78 billion by 2032.
But what do AI cybersecurity products do? And how much can we trust AI versus humans? What are the limits of AI in the cybersecurity field, and what are the risks?
Following are some key points at the start of an ongoing, open-ended discussion.
What do the products do?
Security vendors have been applying AI, including machine learning, to product development and services for at least a decade.
What has changed recently is their adoption of generative AI (Gen AI), artificial intelligence that can create code, text, images, and other content, which became commercially viable around two years ago with the release of ChatGPT by San Francisco-based OpenAI.
For example, two of this year’s major AI-enabled security announcements, one by Redmond, WA-based Microsoft and the other by Palo Alto Networks, placed a huge emphasis on the addition of Gen AI as a means for security operators to greatly speed up product development and deployment and to vastly improve effectiveness, including spotting threats far faster. Both vendors applied the faster-and-better claims across their stables of existing security products.
In Microsoft’s case, the company added its Copilot Gen AI tool to its suite of security products, calling it Microsoft Copilot for Security, and released it for general availability on April 1.
“The industry’s first generative AI solution will help security and IT professionals catch what others miss, move faster, and strengthen team expertise,” the company said at the time. “Copilot is informed by large-scale data and threat intelligence, including more than 78 trillion security signals processed by Microsoft each day, and coupled with large language models to deliver tailored insights and guide next steps. With Copilot, you can protect at the speed and scale of AI and transform your security operations.”
The company provided a lengthy list of quantified speed-ups among trial users, although the headline figures were not quite as startling as the rhetoric implied. “Experienced security analysts were 22% faster with Copilot,” Microsoft said. “They were 7% more accurate across all tasks when using Copilot.” The most telling figure, perhaps, was that, according to Microsoft, “97% said they want to use Copilot the next time they do the same task.”
Likewise, Palo Alto Networks emphasized speed and effectiveness in its May blitz of new AI-enabled products, all of which leverage a new proprietary system the company calls Precision AI, which includes generative features. The company is now building Precision AI into its three existing security offerings: Strata for network security, Prisma for cloud security, and Cortex for security operations centers. With Precision AI onboard, each offering gains its own Gen AI assistant: Strata Copilot, Prisma Cloud Copilot, and Cortex Copilot.
“Generative AI in cybersecurity significantly bolsters the ability to identify and neutralize cyber threats efficiently,” Haider Pasha, Senior Cybersecurity Expert at Palo Alto Networks, explained in an email exchange. “By leveraging deep learning models, this technology can simulate advanced attack scenarios crucial for testing and enhancing security systems. This simulation capability is essential for developing strong defenses against known and emerging threats. Additionally, generative AI streamlines the implementation of security protocols by automating routine tasks, allowing cybersecurity teams to focus on more complex challenges.”
It’s not hard to find similar claims among other computer security vendors. In an online explainer, Sophos cited AI’s ability to quickly analyze large amounts of data. “The potential of leveraging AI in cybersecurity is virtually endless,” Sophos said. “The speed and accuracy of threat detection and response is as close to real-time as possible. AI can help minimize the impact of a ransomware attack by flagging suspicious behavior to your security team as soon as possible. And finally, AI makes cybersecurity operations more efficient through automation, freeing up your security team’s valuable time and resources to work on other, more important tasks.”
What will they really be used for?
While the potential for AI to improve cybersecurity seems obvious, analysts caution about the risks, and note that AI could suit some areas of security but not others, where human, auditable actions could well remain preferable.
“There’s a lot of opportunities for automation of security tasks,” said David Clemente of International Data Corp. (IDC). “There are a lot of repetitive and tedious security tasks, particularly in a security operation center.”
AI can improve recognition of suspicious attempts at logging in, for instance. “It’s not always as clear-cut as someone trying to log into a corporate device from North Korea, so some of those tasks can be automated, and a lot of security companies are looking to use AI to automate some of that work,” said Clemente, who is IDC’s Milan-based research director of European cloud security.
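To make that concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of login-anomaly automation Clemente describes: an anomaly detector trained on historical login features that flags attempts deviating sharply from the norm. The features, values, and thresholds are hypothetical illustrations, not drawn from any vendor’s product.

```python
# A minimal, hypothetical sketch of automated login-anomaly detection:
# train a simple anomaly detector on historical login features and flag
# attempts that deviate sharply from the norm. Feature set and values
# are illustrative only, not taken from any vendor's product.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "normal" logins: [hour_of_day, km_from_usual_location, failed_attempts]
history = np.array([
    [9, 5, 0], [10, 3, 0], [14, 8, 1], [11, 2, 0], [16, 6, 0],
    [9, 4, 0], [13, 7, 0], [15, 5, 1], [10, 1, 0], [12, 3, 0],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

# New attempts: a 3 a.m. login from 8,000 km away after four failed tries,
# and an ordinary mid-morning login from the usual location.
attempts = np.array([[3, 8000, 4], [10, 4, 0]])
for features, label in zip(attempts, detector.predict(attempts)):
    verdict = "flag for review" if label == -1 else "allow"
    print(features.tolist(), "->", verdict)
```

In practice, such a score would typically feed a queue for human review rather than trigger automatic blocking, which is exactly the human-in-the-loop question the analysts raise below.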
Clemente said he also sees good prospects for using Gen AI in such products as GitHub Copilot from San Francisco-based GitHub Inc. to write code for security programs.
Michelle Abraham, IDC’s research director for security and trust, noted that while the vendors’ claims do indeed sound promising, there is so far a dearth of case studies to back up their assertions. “It’s early days,” Abraham said. “A lot of the products have been announced (just) in the last year. So there aren’t huge amounts of people using them—yet.”
Risks and limitations
For all the potential, there are also concerns, and areas where enterprise security departments might want to avoid AI.
IDC’s Clemente pointed out that having Gen AI automate incident reports and actions could deprive reviewers of an audit trail.
With lower levels of AI or automation, Clemente said, “You could go into the tool and have an understanding of why it made the automation choice it made, whereas with Gen AI, you don’t have that option; an auditor would probably say there’s no adequate explainability.” Further, he said, “With a lot of Gen AI tools, there’s no explainability. If forensic evidence or response documentation has been summarized by a Gen AI tool, you have to really be able to trust that. It wouldn’t take many missteps for auditors or a compliance team or a regulator to raise uncomfortable questions about how those AI tools were being used.”
Vendors are aware of this troubling prospect. One potential response would be to run regular audits on the AI systems themselves, noted Palo Alto Networks’ Pasha. “Regular AI audits are vital for maintaining accountability and ensuring that AI systems operate as intended,” he said. “These audits involve a systematic review of AI models, data, and processes to identify potential issues and ensure compliance with ethical and regulatory standards.”
On a related note, there is the risk that AI-enabled security systems could produce false responses, which the industry calls “hallucinations,” and take or instigate action when it is not necessary or, conversely, fail to act when required. One reason: AI systems are built on pre-existing data that might be insufficient or misleading for a new set of circumstances.
Again, this is not lost on the vendors.
“AI models are only as good as the data they’re trained on,” said Pasha. “If the training data is incomplete, unbalanced, or biased, the model may produce skewed results. For example, AI might be biased toward detecting threats from certain regions or types of attacks while overlooking others, leading to gaps in protection.”
Sophos concurs. “AI-powered security systems rely on machine learning algorithms that learn from historical data,” the company noted. “This can lead to false positives when the system encounters new, unknown threats that do not fit into existing patterns.”
Beyond the datasets, other factors could compromise the effectiveness of AI-enabled security.
“Some AI-enabled cybersecurity tools may need to integrate with an organization’s existing infrastructure, which can be complex and resource-intensive,” Pasha said. “Misconfigurations during integration can introduce new vulnerabilities, or the AI system may not function optimally due to compatibility issues.”
All of which is a good reminder of the broader human vs. machine discussion running across society and business.
“We’re trying to find that dividing line between which tasks can be done by a person, and what can be automated,” said IDC’s Clemente. “The hard part is knowing when to have a person in the loop. Security is often time-sensitive. As the time sensitivity of a situation increases, the number of parties we’re willing to trust with our affairs dwindles to a small handful. And it’s the same way with security tools. You need those tools to be reliable, and to not hallucinate in a time-critical situation.”
That receives no argument from the vendors.
“It’s important to remember that AI as a technology is still in its early days,” said Sophos. “AI still requires human intervention, not only to train AI engines but to step in if an engine makes a mistake.”
In other words, even though the cat and the mouse have now written AI into their playbooks, the cybersecurity game still involves people.
At least for now.
Mark Halper is a freelance journalist based near Bristol, England. He covers everything from media moguls to subatomic particles.