According to the 2026 Data Security Index from Microsoft Security, nearly one third (32%) of reported data security incidents involve the use of generative AI tools. More revealing still: according to Harmonic Security's AI Usage Index (January 2026), 22% of files sent to AI services contain sensitive information: source code, financial data, details of M&A deals.
The phenomenon has a name: Shadow AI. Like Shadow IT, it refers to AI tools used without validation by the IT department. In response to this reality, in which 86% of leaders now prefer integrated platforms to reduce alert volume, Bechtle Comsoft offers companies structured support built around Microsoft Purview.
A Threat Landscape Intensifying with AI
For Yacine Drid, the threat is very real. "Data exfiltration has become a paramount issue. An HR colleague who handles personal data without securing it, for example, becomes a major vector of compromise." According to the 2026 Data Security Index, the majority of workers use AI, and more than 70% of them rely on their own AI tools, a phenomenon known as BYOAI (Bring Your Own AI), without the organization having any visibility into the data flows. This risk is amplified by the use of personal accounts to access AI at work, a practice that now affects 58% of employees, up 5 points in a year.
Yacine Drid illustrates the risk with a concrete case from the week before our interview. "We detected a developer submitting the source code of an application to ChatGPT and Copilot. The intention was legitimate: to speed up delivery. But that code could contain sensitive information about the system architecture. The challenge is not to ban AI; it is to control and govern it." Yet to date, only 47% of organizations have implemented controls specific to generative AI.
Microsoft Purview DSPM: Visibility, Detection, Blocking
Microsoft has integrated a dedicated capability into Purview: DSPM, for Data Security Posture Management. According to the 2026 Data Security Index from Microsoft Security, this strategy has become a priority for more than 80% of the organizations surveyed. The tool gives administrators full visibility into interactions between users and AI tools, whether Copilot, ChatGPT, Mistral, or any other service detected on the network.
"From the Purview console, we visualize in real time which AI tools are in use, by whom, and what information is being submitted. We can inspect prompts, identify the types of sensitive data, known as SITs (Sensitive Information Types), and immediately block any non-compliant transfer," Yacine Drid explains. The solution relies on more than 320 predefined detection templates, which can be supplemented with organization-specific business rules.
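To make the SIT mechanism concrete, here is a minimal, hypothetical sketch of the principle: match an outgoing prompt against a set of named patterns and block the transfer on any hit. This is not Purview's actual implementation (its built-in classifiers also use checksums, keyword proximity, and confidence levels); the names `SENSITIVE_INFO_TYPES`, `classify_prompt`, and `should_block`, and the simplified regexes, are all illustrative assumptions.

```python
import re

# Hypothetical illustration of how a DLP engine matches "sensitive
# information types" (SITs) against an outgoing AI prompt. The patterns
# below are deliberately simplified for readability.
SENSITIVE_INFO_TYPES = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IBAN (France)": re.compile(r"\bFR\d{2}(?: ?\w{4}){5,7}\b"),
    "Email Address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the names of every SIT detected in the prompt."""
    return [name for name, pattern in SENSITIVE_INFO_TYPES.items()
            if pattern.search(prompt)]

def should_block(prompt: str) -> bool:
    """Block the transfer as soon as any sensitive type matches."""
    return bool(classify_prompt(prompt))
```

A real policy engine would log the match and the user rather than silently dropping the request, which is precisely the transparency point made below: employees must know that such inspection happens.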
Audit, Acculturation, Compliance: A Three-Step Approach
Make no mistake. Equipping your business with a powerful technological arsenal is not enough. Bechtle Comsoft has structured its support into three phases: audit of the existing environment, acculturation of the teams, then deployment of compliance and blocking rules.
"Before any restrictive policy, everyone must understand what is at stake when confidential information is submitted to an AI service. It is also a matter of transparency: employees must know that administrators can review prompts." This approach fits into a broader trend in which 82% of organizations already plan to integrate generative AI into their own security operations to gain efficiency.
In practice, Yacine Drid sums up the two possible postures: "You can either deploy AI and fix the flaws as you go, or first build the data governance strategy. We obviously recommend the latter. Especially since 88% of decision-makers plan to increase their cybersecurity budget next year to face these challenges." This approach also answers growing regulatory requirements, from the AI Act and NIS2 to the GDPR, which require organizations to document and control AI usage. In this risk economy, the tools exist; what is still missing is method. As Yacine Drid reminds us, "a security flaw can destroy jobs, companies, lives." With a bit of method, it is avoidable.
The real question, the one every leader should ask today, is simple: how much will the acculturation you have not yet launched end up costing you?