Generative AI does not create new forms of attacks; it industrializes them. This is one of the major findings of the 2025 edition of the “Cyber Threat Panorama” by ANSSI.
The agency identifies several direct effects. The first is an improvement in the quality of the lures used in phishing campaigns: the grammatical and stylistic errors that once allowed a wary user to spot a malicious email tend to disappear. AI helps produce more convincing content, in greater volume, with increased diversity and at lower cost.
ANSSI also notes a reduction in the cost of maintaining attack infrastructures, lowering the entry barrier for less sophisticated actors.
Malicious sites indistinguishable to the naked eye
One of the most tangible signals identified by the agency concerns the creation of legitimate-looking websites, entirely generated by AI systems.
These sites are used to host malicious payloads or to perform what ANSSI calls characterization, in other words, the technical profiling of visitors before compromising them.
How did the agency’s teams detect the artificial nature of these sites? Through a telltale anomaly: the insertion of incoherent texts in the middle of paragraphs, with no logical connection to the surrounding content. A subtle sign, which confirms that human vigilance remains, for the moment, a crucial link in detection.
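The anomaly ANSSI describes, sentences dropped into a paragraph with no logical connection to their surroundings, can be approximated programmatically. The sketch below is purely illustrative (it is not ANSSI's tooling, and the Jaccard metric and 0.05 threshold are assumptions): it flags any sentence whose vocabulary overlap with the rest of its paragraph is unusually low.

```python
import re

def tokenize(sentence):
    """Lowercase word tokens, punctuation ignored."""
    return set(re.findall(r"[a-z']+", sentence.lower()))

def flag_incoherent(paragraph, threshold=0.05):
    """Return sentences whose lexical overlap (Jaccard similarity)
    with the rest of the paragraph falls below `threshold`."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]
    flagged = []
    for i, sent in enumerate(sentences):
        words = tokenize(sent)
        rest = set().union(*(tokenize(s) for j, s in enumerate(sentences) if j != i)) \
            if len(sentences) > 1 else set()
        if not words or not rest:
            continue
        if len(words & rest) / len(words | rest) < threshold:
            flagged.append(sent)
    return flagged

# Hypothetical example: a commerce page with one AI-artifact sentence.
paragraph = (
    "Our shipping rates are calculated at checkout. "
    "Standard shipping usually takes three to five business days. "
    "Purple quantum crystals harmonize lunar spreadsheet energy. "
    "Contact our shipping team to arrange bulk orders."
)
print(flag_incoherent(paragraph))
# → ['Purple quantum crystals harmonize lunar spreadsheet energy.']
```

A crude lexical heuristic like this only catches the most blatant insertions; the point is that, for now, such artifacts are detectable at all, whether by a script or by an attentive reader.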
The vicious circle of training data contamination
Generative AI does not only turn against end users; it also threatens the integrity of the models themselves.
ANSSI identifies here a systemic risk: the proliferation of deceptive content on the Internet eventually contaminates the data sets used to train future models.
The mechanism is simple. Large language models learn from data available on the web. If this data is massively polluted by artificial and erroneous content produced by other AIs, whether for malicious purposes or not, next-generation models will incorporate these biases and inaccuracies into their responses.
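The contamination mechanism can be shown with a toy next-word model, a deliberately simplified stand-in for an LLM rather than how production systems are trained. Once fabricated documents outnumber genuine ones in the training set, the model's most likely completion flips to the false claim:

```python
from collections import Counter

def train_bigram(corpus):
    """Count word-pair frequencies across a list of documents."""
    counts = {}
    for doc in corpus:
        words = doc.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts.setdefault(prev, Counter())[nxt] += 1
    return counts

def most_likely_next(model, word):
    """The most frequent continuation seen in training."""
    return model[word].most_common(1)[0][0]

# Genuine documents agree on a fact.
clean = ["the capital of australia is canberra"] * 10

# An attacker floods the web with copies of a fabricated claim.
poison = ["the capital of australia is sydney"] * 50

print(most_likely_next(train_bigram(clean), "is"))           # → canberra
print(most_likely_next(train_bigram(clean + poison), "is"))  # → sydney
```

Real models are vastly more complex, but the underlying arithmetic is the same: training data is a majority vote, and flooding the web shifts the vote.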
According to the national agency, malicious actors deliberately exploit this vector: by flooding the web with fabricated content, they seek to alter the behavior of AI services to distort their results. Generative AI services have thus become high-priority targets in their own right.
AI in the enterprise: an expanding attack surface
The growing integration of AI into corporate operational flows naturally broadens the attack surface, and the consequences of a breach can be severe.
The agency identifies several categories:
Data confidentiality and integrity. A compromised AI system can serve as an entry point into the rest of the information system, with risks of sensitive data exfiltration or of integrity compromise in connected IT systems.
Software supply chain. This may be the most structural risk identified by ANSSI: compromising a system specialized in code generation could introduce vulnerabilities or backdoors in produced code, without the knowledge of development teams. A new form of supply chain attack, silent and hard to detect.
Reputational and economic risks. Any data breach related to an AI system threatens client trust, with potentially existential implications for some organizations.
The ANSSI recommendations: isolate, monitor, audit
To address these risks, ANSSI published a guide dedicated to securing generative AI solutions based on large language models (LLMs).
The main principles are as follows.
> Isolation. This is the central principle. The agency recommends physical or functional isolation of AI systems to prevent a breach from propagating. For software whose design the organization does not fully control, the recommendation is clear: deploy it on an isolated, dedicated workstation.
> Flow monitoring. Isolation alone is not enough. Active monitoring of information exchanges among AI components is necessary to detect any behavioral anomalies.
> Thorough audits. ANSSI discourages audits with a narrow perimeter, which can leave hidden paths of compromise between the AI environment and the office IT environment.
> Do not rely solely on tools. The agency points out a significant limitation: a security strategy that rests exclusively on tools such as EDR or MFA is insufficient. Attackers learn to bypass these tools or inject themselves directly into legitimate user sessions.
> Prepare crisis management. In case of an incident, the priority must be the immediate isolation of compromised systems, combined with revoking the attacker’s access. This sequence should be anticipated in continuity plans (BCP) and disaster recovery plans (DRP).
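The flow-monitoring recommendation above can be sketched in code. The following is a minimal illustration, not ANSSI's tooling: the size-only metric, the z-score threshold and the component name are all assumptions made for the example.

```python
from statistics import mean, stdev

class FlowMonitor:
    """Illustrative sketch: flag an exchange between AI components when
    its payload size deviates sharply from that component's baseline."""

    def __init__(self, z_threshold=4.0, min_samples=20):
        self.z_threshold = z_threshold
        self.min_samples = min_samples
        self.history = {}  # component name -> observed payload sizes

    def observe(self, component, payload_size):
        """Record one exchange; return True if it looks anomalous."""
        sizes = self.history.setdefault(component, [])
        anomalous = False
        if len(sizes) >= self.min_samples:
            mu, sigma = mean(sizes), stdev(sizes)
            if sigma > 0 and abs(payload_size - mu) / sigma > self.z_threshold:
                anomalous = True
        sizes.append(payload_size)
        return anomalous

# Hypothetical component: a retrieval service feeding an LLM.
monitor = FlowMonitor()
for i in range(30):
    monitor.observe("rag-retriever", 990 if i % 2 else 1010)  # normal traffic
print(monitor.observe("rag-retriever", 500_000))  # → True (possible exfiltration)
```

A production deployment would track richer signals (destinations, frequencies, content classes) and feed them into the monitoring stack, but the principle is the one ANSSI states: isolation plus active observation of what crosses the boundary.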
The rapid evolution of AI use cases demands, according to ANSSI, regular reassessment of the threat. A warning addressed to CISOs and general management alike: AI is no longer merely a productivity tool; it has become a risk vector in its own right, one that requires dedicated security governance.