Shadow AI: How French Employees Are Secretly Harnessing Artificial Intelligence in the Workplace

In recent years, employees have begun to leverage artificial intelligence in informal and often clandestine ways to enhance their productivity and creativity. They rewrite emails with tools like ChatGPT, translate technical documents, structure presentations, or conduct research on AI-powered platforms. This phenomenon presents a new challenge for organizations, akin to the well-known concept of Shadow IT but now evolving into what can be termed Shadow AI.

An Increasingly Pervasive Hidden Practice

This covert set of practices is quietly proliferating within companies, often without the oversight or awareness of management. Recent research by Inria and DataCraft estimates that a significant share of the French workforce engages in informal AI use. The study covered 14 leading French companies and institutions, including Airbus, France’s National Health Insurance Fund (Assurance Maladie), L’Oréal, Crédit Agricole, Montpellier University Hospital, MAIF, and the Ministry of Armed Forces. The findings show that employees adopt AI tools informally out of a desire for greater efficiency, creativity, and autonomy in their roles.

The Disconnect Between Strategic AI Investments and Actual Usage

One of the key observations from the study is that despite a decade of investment in proofs of concept (PoCs), only about 20% of AI initiatives have progressed to full-scale industrial deployment. This gap often stems from inadequate infrastructure, low-quality data, unclear objectives, and a disconnect from real-world work practices. Top-down approaches, focused primarily on streamlining and standardizing processes, tend to overlook the nuances of daily operations and tacit skills. Such strategies frequently meet with employee resistance or disinterest because they fail to align with the realities of on-the-ground work.

A Diverse Landscape of AI Usage: Visible and Invisible

Contrasting sharply with formal initiatives, informal AI practices develop precisely because they meet immediate needs and emphasize worker autonomy. Employees deploy generative AI for a wide range of tasks, often through a bricolage approach: creative improvisation that involves experimenting with available resources, iterating, and continuously adjusting. This adaptive process has elevated prompt engineering and validation into essential competencies, as employees learn to craft effective prompts and verify the outputs they receive.
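
To make that loop concrete, here is a minimal sketch of what an employee’s informal “prompt, verify, adjust” routine might look like. It assumes the OpenAI Python SDK purely for illustration; the model name, the validation rules, and the retry limit are hypothetical placeholders, not anything the study prescribes.

```python
# Minimal sketch of an informal "prompt, verify, adjust" loop,
# assuming the OpenAI Python SDK. The model name, validation rules,
# and retry limit are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_email(instructions: str, max_attempts: int = 3) -> str:
    prompt = instructions
    text = ""
    for _ in range(max_attempts):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        text = response.choices[0].message.content or ""
        # Validation: the human-defined quality check the study describes
        # as a new core competency. Here, a crude length-and-tone test.
        if len(text.split()) <= 150 and "!" not in text:
            return text
        # Adjust the prompt and try again -- the "bricolage" part.
        prompt = instructions + " Keep it under 150 words, in a neutral tone."
    return text  # fall back to the last draft for manual editing

print(draft_email("Rewrite this note to a client apologizing for a delayed delivery."))
```

The essential point is the validation step: the output is never accepted blindly, and each rejected draft feeds a refined prompt back into the loop.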

The reality of Shadow AI spans several levels of covert practice. Some employees use these tools in total secrecy, such as one individual who reportedly fed real patient data and other sensitive information directly into an AI tool, without safeguards. Others operate in an environment of tacit tolerance, where trust in employees is implicit. The spectrum also includes more insidious cases, like AI functions embedded in official software platforms without explicit acknowledgment, or uses hidden from clients to protect the perceived value of human labor (“Clients mainly buy time, not AI alone”).

Risks and Challenges of Shadow AI

Despite its potential benefits, Shadow AI carries inherent risks. Its unchecked proliferation can jeopardize organizational security, compliance with regulations, data governance, and technological sovereignty. The use of unverified tools may lead to the transfer of sensitive information to external servers, introduce biases, or produce factual inaccuracies, thereby compromising both data confidentiality and work quality.
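
To illustrate how such leaks occur, and what a lightweight countermeasure might look like, here is a minimal sketch of a pre-send check that scans a prompt for data that should never reach an external service. The patterns below are illustrative examples only; a real deployment would need rules tailored to the organization’s own data.

```python
# Minimal sketch of a pre-send guard that flags prompts likely to leak
# sensitive data to an external AI service. The patterns are illustrative
# examples, not an exhaustive or officially endorsed list.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "French social security number": re.compile(r"\b[12]\d{12}(\d{2})?\b"),
    "French IBAN": re.compile(r"\bFR\d{2}[A-Z0-9]{23}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize the file for patient jean.dupont@example.fr, SSN 184127645108946."
findings = check_prompt(prompt)
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
else:
    print("Prompt passed the basic screening.")
```

Even a simple screen like this makes the risk visible to the employee at the moment it matters, rather than after the data has already left the organization.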

From the employee perspective, engaging in unofficial AI practices can lead to moral discomfort or psychological pressure. The study highlights a strong duality: for users, AI can be both a powerful efficiency booster and a source of cognitive, emotional, and professional destabilization. This ambivalence underscores the importance of carefully managing and integrating AI technologies into the workplace.

Yet the study’s authors argue that “Shadow AI can represent a strategic opportunity if properly supported.” Informal experimentation outside formal structures could serve as a catalyst for redefining professional practices, transforming bricolage into a driver of innovation and improvement.

Four Strategic Recommendations to Harness Shadow AI

To turn these informal practices into legitimate strategic advantages, organizations should adopt a proactive approach guided by four key principles:

  1. Collectively Negotiating the Transition from Shadow AI
    Organizations need to gradually build a legitimate framework for AI usage. This involves understanding current practices (the “pilot” phase), creating spaces for open discussion (“sharing”), and establishing a secure environment with validated tools and legal compliance (“securing”). This three-step process emphasizes starting from users’ actual behavior rather than imposing prescriptive restrictions.
  2. Facilitating Collaborative Workshops on AI and Work Quality
    Workshops aiming to discuss and co-define what quality work entails in an AI-augmented environment can help establish shared criteria. Moving beyond technical standards, these sessions should explore what constitutes good practices and outcomes in the evolving landscape of AI-supported work.
  3. Building a Trust-Based Framework
    The solution is neither an outright ban on informal AI use nor unrestricted access. Instead, organizations should develop a clear, flexible, and evolving trust framework: explicitly defining permissible data and use cases, outlining responsibilities, and implementing lightweight supervision mechanisms (a minimal sketch of such a policy follows this list). Engaging employee representatives and stakeholders in co-creating this framework enhances its legitimacy and acceptance.
  4. Implementing Holistic Training Programs
    Skills related to generative AI extend beyond technical proficiency. Training should include understanding the limitations of models, critically evaluating generated content, and exercising professional judgment about what can responsibly be delegated to AI systems. A comprehensive curriculum will prepare employees for ethical and effective AI integration.
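
As a purely illustrative complement to recommendation 3, here is a minimal sketch of how a trust framework’s rules could be made machine-readable: an explicit mapping of approved tools to the data categories they may receive. The tool names and categories are hypothetical, not the framework the study proposes.

```python
# Minimal sketch of a machine-readable trust policy. Tool names and
# data categories are hypothetical illustrations.
from dataclasses import dataclass

# Which data categories each approved tool may receive.
POLICY: dict[str, set[str]] = {
    "internal-llm": {"public", "internal", "confidential"},
    "chatgpt": {"public"},
}

@dataclass
class Request:
    tool: str
    data_category: str  # e.g. "public", "internal", "confidential"

def is_allowed(request: Request) -> bool:
    """Lightweight supervision: allow only approved tool/data pairings."""
    return request.data_category in POLICY.get(request.tool, set())

print(is_allowed(Request("chatgpt", "public")))        # True
print(is_allowed(Request("chatgpt", "confidential")))  # False
```

A declarative policy of this kind keeps supervision lightweight: the rules stay visible, auditable, and easy to renegotiate as the framework evolves.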

The Inria and DataCraft study titled “Generative AI in the Workplace: Supporting and Securing Employee Initiatives” offers more insights into these issues. It underscores that guiding informal AI practices from a strategic perspective can unlock significant benefits while minimizing risks.

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.