IFOP’s 2025 barometer for Talan reveals that 43% of generative AI users rely on it in a professional setting. While 52% of corporate users are encouraged to do so, only 15% have received training and only 9% of employees have tools provided by their organization.
Failing to address the issue will mechanically push employees toward what is now called Shadow AI. A study conducted in the fourth quarter of 2024 by Harmonic Security, analyzing prompts sent to the leading LLMs, found that 45.8% of requests risked exposing customer data, notably billing and authentication information, and that 26.8% of prompts contained HR data such as payroll figures and personal identifiers.
In addition to a necessary awareness and training effort, CISOs must deploy tools to prevent, as far as possible, any data leakage arising from these new uses. This Shadow AI phenomenon primarily raises an issue of visibility: knowing who is doing what within the organization.
Tenable, a security vendor specializing in discovering a company’s digital assets, has taken on this AI-driven data-leakage issue: “Our platform covers two major AI-related use cases: Shadow AI on the one hand, and on the other what is called AI-SPM (AI Security Posture Management),” explains Bernard Montel, EMEA Chief Technology Officer and Strategist at Tenable. This module aims to assess the exposure level of AI in the cloud, and to accelerate its development in this area, Tenable has just completed the acquisition of Apex, a company specializing in these AI use cases.
For Xavier Daspre, CTO of Proofpoint, many companies have opened the gates to generative AI and must now equip themselves to know whether their employees are disseminating confidential information to these services.

The vendor is working on three vectors: email, its historic domain; the CASB, for protecting cloud applications; and endpoint protection. “These last two vectors cover the GenAI-related use cases. The solution analyzes the data to identify, for example, personally identifiable information and anonymize the document.”
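The inline analysis described here can be illustrated with a minimal sketch: a hypothetical regex-based pass that flags personally identifiable information in a document and masks it before the content leaves the endpoint. The patterns and the `redact_pii` helper are illustrative assumptions, not Proofpoint’s implementation — production DLP engines use far richer detectors.

```python
import re

# Illustrative PII patterns (assumed for this sketch, not a vendor's ruleset)
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{4}){3,7}\b"),
    "phone": re.compile(r"\b(?:\+33|0)[1-9](?:[ .-]?\d{2}){4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace each PII match with a typed placeholder and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label.upper()}-REDACTED]", text)
    return text, findings

clean, found = redact_pii("Contact jean.dupont@example.com au 06 12 34 56 78")
print(found)   # which PII types were detected
print(clean)   # anonymized text, safe to forward
```

In a real deployment this kind of pass would sit in the endpoint agent or CASB proxy, so the anonymized version — not the original — is what reaches the GenAI service.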
Proofpoint completed the acquisition of Normalyze in October 2024 and gained its DSPM (Data Security Posture Management) solution. This identifies the LLMs used by employees and analyzes in real time the data that flows to their APIs.
SASE Delivers a Comprehensive Solution
Having its roots in CASB, Netskope naturally turned its attention to protecting against data leaks directed at LLMs.
“Originally, all generative AIs such as ChatGPT, Gemini and Copilot were SaaS applications; yet we already had the tools needed to intercept, inspect and enforce policies on SaaS traffic,” argues Ray Canzanese, Director of Netskope Threat Labs. “We could already detect threats and breaches of data policy and, for example, guide users who are using unauthorized AI solutions toward approved ones.”
Proofpoint has extended its Cloud Security Posture Management offering to AI and is entering the AI-SPM space, competing with offerings from Palo Alto Networks, Tenable and Wiz.
Adrien Porcheron, General Manager France of Cato Networks: “We must take data encryption into account.”
Ray Canzanese, Director of Netskope Threat Labs: “To date, we have studied more than 1,500 distinct Gen AI applications.”
“We thus launched two new features tied to our DLP module at the beginning of the year, to meet the needs of clients who must manage the adoption of AI in their organization. Those who do not classify all of their data risk having their employees send confidential information to AI and exposing sensitive data to the outside world. A further aspect to consider is data encryption. Internet traffic is now largely encrypted, yet traditional DLP systems do not have the capacity to decrypt TLS inline. These uninspected flows constitute a major risk, since the company controls only a small portion of the traffic. We perform this decryption natively within the platform, and we can control all resources the company exchanges with the outside world without needing to deploy new resources.”
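The point about unclassified data can be made concrete with a small sketch: an egress rule that only lets content labeled as public reach GenAI destinations. The labels, domain list and `egress_decision` helper are illustrative assumptions, not Cato’s or any vendor’s actual policy engine.

```python
# Minimal sketch of a classification-aware egress rule: documents carry a
# sensitivity label, and only "public" content may reach GenAI domains.
# Domain list, labels and policy logic are assumptions for illustration.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}
ALLOWED_LABELS_FOR_GENAI = {"public"}

def egress_decision(destination: str, label: str) -> str:
    """Return 'allow' or 'block' for an outbound upload."""
    if destination in GENAI_DOMAINS and label not in ALLOWED_LABELS_FOR_GENAI:
        return "block"
    return "allow"

print(egress_decision("chat.openai.com", "confidential"))
print(egress_decision("chat.openai.com", "public"))
print(egress_decision("intranet.example.com", "confidential"))
```

The quote’s caveat applies directly: a rule like this is only as good as the labeling behind it, and it presupposes the platform can decrypt TLS inline to see the destination and payload at all.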
“With our DSPM, we had all the necessary elements to secure generative AI services as they began to arrive on the market. We only had a small delta to develop to cover them, which gave us a head start in covering this space comprehensively. Our Cloud Confidence Index (CCI) database catalogs all SaaS services, and when all these AI applications hit the market, we simply added them. To date, we have studied more than 1,500 distinct Gen AI applications. The advantage this gives us is that a Netskope customer has a portal that makes it very easy to see which of these applications are used by employees, how they use them, who uses them, and where they access them from. It then becomes very easy to implement controls to limit what they are able to do. Some applications can be blocked, others limited to private use.”
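The inventory-then-control workflow described above can be sketched in a few lines: aggregate who uses which GenAI app and from where, then look each app up in a per-app policy. The log entries, app names and `POLICY` table are hypothetical, loosely mirroring the allow/block/limit-to-private-use controls mentioned in the quote.

```python
from collections import defaultdict

# Hypothetical traffic log: (user, app, source location) -- illustrative data
LOG = [
    ("alice", "ChatGPT", "Paris"),
    ("bob", "UnknownGenAI", "Lyon"),
    ("alice", "Copilot", "Paris"),
    ("carol", "ChatGPT", "remote"),
]
# Per-app verdicts; anything uncataloged falls back to a restrictive default
POLICY = {"ChatGPT": "allow", "Copilot": "allow", "UnknownGenAI": "block"}

usage = defaultdict(set)
for user, app, location in LOG:
    usage[app].add((user, location))

for app, sessions in sorted(usage.items()):
    verdict = POLICY.get(app, "limit-to-private-use")
    print(app, verdict, sorted(sessions))
```

The value of a catalog like the CCI, in this reading, is precisely the `POLICY` table: the discovery step is generic, but the verdicts require having studied each application.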