Expert Op-Ed: The Three Key Levers for Cybersecurity Teams Facing […]

Cybersecurity teams are already prepared for insider threats: employees, trusted subcontractors, or partners can hold privileged access and act maliciously or abusively. Yet a new form of threat is emerging in this area, one that is not necessarily malicious, or even human. Agentic AI systems, designed to automate certain decision-making tasks in place of humans, expose the flaws of traditional authorization frameworks.

Security and IT teams must therefore anticipate these risks clearly and act accordingly. The first step is understanding how such systems endanger authorization systems.

What Works for Humans Doesn’t Always Translate to AI Agents

Authorization, or AuthZ, restricts each user's access to resources to what has been explicitly granted.
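As a minimal sketch of the idea, an AuthZ check reduces to asking whether an identity has been explicitly granted an action on a resource, denying by default. The Python below is purely illustrative; none of the names come from a specific product:

```python
# Minimal AuthZ sketch: an explicit allow-list keyed by (user, action, resource).
# All names here are illustrative, not tied to any real product.
from typing import Set, Tuple

class AuthZ:
    def __init__(self) -> None:
        self._grants: Set[Tuple[str, str, str]] = set()

    def grant(self, user: str, action: str, resource: str) -> None:
        self._grants.add((user, action, resource))

    def is_allowed(self, user: str, action: str, resource: str) -> bool:
        # Deny by default: only explicitly granted triples pass.
        return (user, action, resource) in self._grants

authz = AuthZ()
authz.grant("alice", "read", "payments-db")
assert authz.is_allowed("alice", "read", "payments-db")
assert not authz.is_allowed("alice", "write", "payments-db")
```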

However, AuthZ systems cannot anticipate every scenario. Most existing systems are modeled around human threats and rely on the assumption that external factors, such as legal frameworks, social norms, or the fear of sanctions, will constrain inappropriate behavior.

Consequently, granting extra access is generally not seen as a problem, and over-provisioning is a tolerated, even normalized, practice. A new employee is often given the same privileges as their predecessor without thorough verification. Until now, this approach has rarely caused major problems, because most people do not exploit these extra accesses: they know that any abuse or violation of company rules can have consequences, from losing their employer's trust to losing their job, or even imprisonment.

AI agents, for their part, do not have this ethical sense.

The Rise of Chaos Agents

Agentic AI systems are semi-autonomous agents capable of collaborating to accomplish complex tasks. Their strength lies in their ability to explore a wide range of solutions and optimize their execution.

But this algorithmic efficiency comes with unpredictability, especially in complex environments where multiple systems interact. To fulfill their mission, agents may invent processes no one anticipated: sometimes brilliant, sometimes deeply disruptive.

By definition, the emergent behaviors of AI agents escape rule-based governance shaped by our expectations of human conduct. If we create agents capable of discovering their own ways of working, we must expect them to take actions no human ever anticipated.

As a result, agents acting on behalf of humans could end up exposing those users' access rights and roles. Freed from social norms, AI agents could have harmful effects on the business.

For example, an AI tasked with building a solution for a user's payment process might generate code guided purely by optimization logic, then deploy to production code that shuts down AWS or Google Cloud services it deems irrelevant to its mission (but which are essential to other parts of the business), or that destabilizes an otherwise stable set of systems.

Three Priorities for Robust AI Agent Governance

Security teams can prevent this chaos by proactively adopting best practices. Responsible governance will make all the difference, and companies can start by focusing on the following key points:

1. Composite Identities: today, authentication (AuthN) and authorization (AuthZ) systems do not distinguish between human users and AI agents. When AI agents act, they either act on behalf of human users or use an identity assigned to them within a human-centric AuthN/AuthZ framework.

This complicates previously simple questions: Who wrote this code? Who opened this merge request? Who created this Git commit? It also raises new ones: Who asked the AI agent to generate this code? What context did the agent need to create it? What resources did the AI have access to? Composite identities make these questions answerable. A composite identity links the AI agent's identity to the human user who directs it, so when an agent attempts to access a resource, it can be authenticated, authorized, and tied back to the responsible human.
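As a rough sketch of how this could work (the class and field names below are hypothetical, not a GitLab or standards-based API), a composite identity can be modeled as a principal carrying both the agent's identity and the delegating human's, so every access decision names the pair:

```python
# Hypothetical sketch of a composite identity: an agent principal
# permanently paired with the human who delegated the task.
from dataclasses import dataclass

@dataclass(frozen=True)
class CompositeIdentity:
    agent_id: str       # e.g. "code-gen-agent-7"
    human_id: str       # the accountable user who directed the agent
    task_context: str   # why the agent is acting (for the audit trail)

def is_allowed(identity: CompositeIdentity, action: str, resource: str,
               grants: set[tuple[str, str, str]]) -> bool:
    # Require that BOTH the agent and the delegating human hold the grant,
    # so the agent can never exceed the human's own permissions.
    return ((identity.agent_id, action, resource) in grants and
            (identity.human_id, action, resource) in grants)

grants = {("alice", "push", "repo/payments"),
          ("code-gen-agent-7", "push", "repo/payments")}
ident = CompositeIdentity("code-gen-agent-7", "alice", "refactor checkout flow")
print(is_allowed(ident, "push", "repo/payments", grants))   # True
print(is_allowed(ident, "push", "repo/billing", grants))    # False
```

The intersection rule above is just one possible policy; the essential property is that every check produces a record naming both the agent and the accountable human.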

2. Comprehensive Monitoring Frameworks: DevSecOps teams need ways to monitor AI agents’ activities across multiple workflows, processes, and systems. It isn’t enough to know what an agent does in a codebase, for example. You must also monitor its activity in test and production environments, in the associated databases, and in all applications it can access.
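One way to picture this (the event schema below is an assumption, not a standard) is a single structured audit event emitted by every system the agent touches, keyed by the composite identity so activity can be correlated across the codebase, CI, production, and databases:

```python
# Hypothetical structured audit event for agent activity, emitted by every
# system the agent touches so one query can reconstruct its full trail.
import json
import time

def audit_event(agent_id: str, human_id: str, system: str,
                action: str, target: str) -> str:
    return json.dumps({
        "ts": time.time(),
        "agent_id": agent_id,       # which agent acted
        "on_behalf_of": human_id,   # accountable human from the composite identity
        "system": system,           # codebase, CI, prod, database, app...
        "action": action,
        "target": target,
    })

# The same schema across systems lets DevSecOps teams follow one agent
# from a merge request through CI to a production deployment.
print(audit_event("code-gen-agent-7", "alice", "ci", "trigger_pipeline", "payments"))
```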

It’s possible to imagine a world in which organizations run autonomous resource information systems (ARIS) alongside existing human resources information systems (HRIS): maintaining profiles of autonomous agents, documenting their capabilities and specializations, and managing their operating limits. The beginnings of such technologies are visible in large language model data management systems like Knostic, but this is only the start.
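A minimal data model for such a registry might look like the sketch below; the field names are purely illustrative, analogous to an HRIS employee record:

```python
# Illustrative ARIS record: a registry profile for an autonomous agent.
# Field names are assumptions, not drawn from any existing product.
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    agent_id: str
    owner_team: str                                          # accountable team
    capabilities: list[str] = field(default_factory=list)    # what it can do
    specializations: list[str] = field(default_factory=list)
    operating_limits: dict[str, str] = field(default_factory=dict)  # hard bounds

registry: dict[str, AgentProfile] = {}
registry["code-gen-agent-7"] = AgentProfile(
    agent_id="code-gen-agent-7",
    owner_team="platform-security",
    capabilities=["generate_code", "open_merge_request"],
    specializations=["python", "payments"],
    operating_limits={"environments": "staging only", "deploy": "forbidden"},
)
```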

3. Transparency and Accountability: with or without sophisticated monitoring frameworks, organizations and their employees must be transparent when using AI and establish clear accountability structures for autonomous AI agents. They should regularly review the agents’ actions and outcomes, and, more importantly, someone must be held responsible if the agent oversteps its bounds.

Deploying Agents Responsibly

AI agents will push past the limits of existing AuthZ systems, but that does not mean they must become chaos agents sowing disruption across a company. Emerging technologies often challenge established security practices; we cannot foresee the unknown, and that is okay. The situation recalls the shift to cloud computing in the last wave of technological change: security frequently lags behind innovation, and the path forward requires balance. Responsible adoption begins with embracing emerging best practices. Agents need not sow chaos if appropriate governance frameworks are put in place now.

* Josh Lemos is CISO at GitLab
