Balancing Predictive and Generative AI in Cybersecurity Solutions

A proprietary LLM hosted by its vendor: this is the preferred approach for injecting GenAI into incident detection and response, at least among cybersecurity solution vendors. ANSSI noted as much after interviewing 18 of them*.

Seven of them follow this model. The others rely on open-source LLMs, hosted by themselves (5), by a cloud provider (4), or on the client's premises (1).

The balance is different with predictive AI. Internally developed models dominate (16 vendors, including 10 that use the cloud at least partially for hosting). EDRs are an exception, with a clear tendency to host the models at the customers’ premises, on endpoints.

GenAI or PredAI?

Predictive AI is mainly used for:

  • Analyzing user and device behaviors
  • Detecting network anomalies (traffic volume, latency, protocols…)
  • Prioritizing incidents
  • Detecting malware

ANSSI assigns strong added value to detecting deviant user and asset behaviors, as well as to malware detection. The same applies to detecting malicious PowerShell scripts and the spread of ransomware.
Volumetric-anomaly detection falls into the "moderate added value" category, as does detection of domain-generation algorithms.
Latency-anomaly detection ranks as "low added value," alongside data-exfiltration detection and lateral-movement detection.
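To make the detection use cases above concrete, here is a minimal sketch of volumetric anomaly detection, one of the PredAI applications the report evaluates. It flags traffic samples that deviate strongly from the median using the robust modified z-score; the data and threshold are illustrative assumptions, not taken from any of the surveyed products.

```python
from statistics import median

def volumetric_anomalies(samples, threshold=3.5):
    """Flag indices whose value deviates strongly from the median,
    using the modified z-score (based on median absolute deviation),
    which resists being skewed by the outliers it is hunting."""
    med = median(samples)
    mad = median(abs(v - med) for v in samples)
    if mad == 0:  # all values identical: nothing to flag
        return []
    return [i for i, v in enumerate(samples)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical per-minute traffic volumes with one obvious spike.
traffic = [100, 98, 102, 101, 99, 100, 5000, 97, 103]
print(volumetric_anomalies(traffic))  # -> [6]
```

Production systems would of course work on streaming data with seasonal baselines, but the principle (score deviation from a learned norm, alert above a threshold) is the same.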

For the qualification phase, automated triage and prioritization fall under moderate added value, while alert enrichment and kill-chain reconstruction are high value-added. In the realm of investigation, ANSSI assigns low added value to query generation, and medium value to chatbot assistance as well as to action suggestions.
On the response side, orchestration of remediation actions carries the strongest added value; action suggestions rank medium and incident-report generation low.

Regarding the build:

  • High added value for SIEM migration, generation of attack-simulation data, development of parsers, and automatic extraction of IoCs (indicators of compromise)
  • Low added value for automatically adjusting AI-model parameters, creating zero-trust rules, guidance in using solutions, assistance in generating playbooks, generation of rules/playbooks, and environment analysis to improve security posture

High-maturity uses have low added value

In the surveyed sample, GenAI usage is almost entirely concentrated on the build portion (8 vendors). The other uses are covered by PredAI (detection: 13 vendors; investigation: 2 vendors; response: 3 vendors; qualification: 3 vendors, in hybrid use with GenAI).

Low-value use cases are currently the most mature. Top of the list are support in using the solutions and query generation. At the other end of the spectrum sit autonomous generation of playbooks and rules, alert qualification, and autonomous orchestration of response tools.

LLM customization is systematically done via RAG (retrieval-augmented generation), with no retraining of the models.
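The RAG pattern the vendors rely on can be sketched as follows. This is a toy illustration, not any vendor's pipeline: it uses naive keyword overlap as a stand-in for the vector search a real implementation would use, and the knowledge-base entries are invented.

```python
import re

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=1):
    """Rank documents by keyword overlap with the query
    (a stand-in for embedding-based vector search)."""
    q = tokens(query)
    return sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    """Ground the LLM prompt in retrieved context: the model's
    knowledge is extended at query time, with no retraining."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical internal knowledge base a SOC assistant might index.
kb = [
    "Alert A123: unusual PowerShell execution on host WS-42.",
    "Playbook: isolate endpoint, capture memory, notify SOC lead.",
]
print(build_prompt("which playbook to isolate an endpoint", kb))
```

The appeal for vendors is clear: customer-specific knowledge stays in the retrieval index, so the underlying (often proprietary) LLM never needs retraining on client data.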

* Custocy, Darktrace, Elastic, Exabeam, Extrahop, Gatewatcher, HarfangLab, Microsoft, Mindflow, Nozomi Networks, Nucleon Security, OGO Security, Parcoor, Qevlar AI, Sekoia, Sesame IT (now Jizö AI), Tehtris and Trellix.
