Expert Opinion: AI Agents – The Missing Link in Zero Trust

The accelerated adoption of artificial intelligence is profoundly reshaping technology systems. From copilots embedded in business tools to automated workflows driven by autonomous agents, AI is no longer a mere application service. It now acts as an operational player within the information system, capable of executing actions, querying third-party systems, and even handling sensitive data. This poses an emerging challenge for Zero Trust architectures.
This evolution gives rise to a new category of digital identities: non-human identities tied to AI agents. API keys, service accounts, OAuth tokens, or machine-to-machine identities become the interaction vectors between models, applications and infrastructure. But how should these new actors be integrated into existing trust models?
For years, the Zero Trust paradigm has stood as a reference for securing information systems. Its principle is simple: do not trust any identity by default and verify every access in context and according to risk level.
Zero Trust, however, was initially designed to manage human identities and user devices. In an AI-enhanced information system, an increasing share of interactions originates from automated agents. If these technical identities are not inventoried, governed, or monitored with the same rigor as human accounts, the Zero Trust architecture loses part of its effectiveness. Securing AI agents thus becomes a central element of any modern security strategy.

AI agents, the new identities of the IT system

AI architectures rely on multiple automated access mechanisms. To call a model, query a database, trigger an action in a business tool, or fetch information via an API, an AI agent must possess a technical identity. These identities can take the form of API keys granting access to services or databases, service accounts used to run automated workflows, OAuth tokens authorizing access to SaaS applications, or machine-to-machine identities in cloud-native architectures.
These mechanisms are essential to the agents’ operation. However, they also introduce a paradigm shift: interactions with the IT system that were predominantly triggered by human users are increasingly orchestrated by autonomous processes. An agent capable of analyzing a support ticket, consulting a knowledge base, and triggering an action acts as a system user, often at a scale and speed well beyond human capabilities.
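To make this concrete, here is a minimal sketch, in Python, of what such a technical identity can look like in practice: a scoped, short-lived credential that the agent presents on every call. The class, the endpoint, and the scope strings are hypothetical illustrations, not a specific vendor's API.

```python
# A minimal sketch of an agent acting through its own technical identity.
# The AgentIdentity class, the kb.example.internal endpoint, and the scope
# names are hypothetical illustrations, not a specific vendor's API.
import time
import urllib.parse
import urllib.request
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    """A non-human identity: an agent ID plus a short-lived, scoped credential."""
    agent_id: str
    token: str
    scopes: frozenset[str]
    expires_at: float  # epoch seconds; short lifetimes limit exposure

    def can(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at

def query_knowledge_base(identity: AgentIdentity, query: str) -> bytes:
    """Every call carries the agent's own credential, never a shared human account."""
    if not identity.can("kb:read"):
        raise PermissionError(f"{identity.agent_id} lacks 'kb:read' or its token expired")
    req = urllib.request.Request(
        "https://kb.example.internal/search?q=" + urllib.parse.quote(query),
        headers={"Authorization": f"Bearer {identity.token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```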

The blind spot of Zero Trust

The Zero Trust model rests on several pillars: strong authentication, access segmentation, continuous verification, and the principle of least privilege. These mechanisms are now well understood for human users. Non-human identities, however, frequently remain far less tightly controlled. In many organizations, API keys and service accounts are created to meet a project need, then remain active without real oversight. They sometimes carry extended privileges to simplify technical integrations.
This phenomenon is amplified by the speed of experimentation around AI. Data, innovation, and business teams deploy agents to automate tasks, connect models, or enrich applications, sometimes outside traditional governance processes. As a result, parts of the IT system operate with automated access that escapes the usual controls.
For an attacker, these identities are an attractive target. Unlike user accounts, they are typically not protected by multifactor authentication and may possess extended authorizations. An API key exposed in a code repository or a compromised token can thus grant access to critical resources. In a Zero Trust architecture, these identities outside the control boundary create an implicit trust zone—precisely what this model seeks to avoid.
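The gap is often visible with a simple audit. The sketch below assumes a hypothetical inventory export (in practice it would come from an IdP, cloud IAM, or secrets manager) and flags the two classic findings: identities with no human owner, and identities that have not been used in months.

```python
# A sketch of a stale-identity audit. The inventory records and field names
# are hypothetical placeholders for a real identity-provider export.
from datetime import datetime, timedelta, timezone

inventory = [
    {"id": "svc-etl-prod", "owner": "data-team", "last_used": "2025-01-10"},
    {"id": "key-poc-demo", "owner": None, "last_used": "2023-06-02"},
]

STALE_AFTER = timedelta(days=90)
now = datetime.now(timezone.utc)

for ident in inventory:
    last_used = datetime.fromisoformat(ident["last_used"]).replace(tzinfo=timezone.utc)
    if ident["owner"] is None:
        print(f"{ident['id']}: no human owner, so no one is accountable for it")
    if now - last_used > STALE_AFTER:
        print(f"{ident['id']}: unused for {(now - last_used).days} days, candidate for revocation")
```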

Excessive Privileges and “Shadow Agents”: Dangers to Watch

One of the major risks associated with AI agents concerns privilege management. To simplify development, it is common to grant an agent excessive rights: full database access, or broad permissions across several APIs. This runs contrary to the very principle of least privilege: an agent should have only the rights necessary for its function. Yet this granularity is still rarely applied.
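The contrast is easy to express in code. In this sketch, the scope strings and the matching rule are assumptions for illustration; the point is that a wildcard grant authorizes actions nobody intended, while a scoped grant denies everything outside the agent's function by default.

```python
# A sketch contrasting broad and scoped grants. Scope names are hypothetical.
BROAD_GRANT = {"db:*", "crm:*", "mail:*"}        # convenient, dangerous
SCOPED_GRANT = {"db:read:tickets", "crm:read"}   # least privilege

def authorize(granted: set[str], requested: str) -> bool:
    if requested in granted:
        return True
    # A broad grant like "db:*" matches anything under "db:", which is
    # exactly what least privilege tries to avoid.
    service = requested.split(":", 1)[0]
    return f"{service}:*" in granted

assert authorize(BROAD_GRANT, "db:drop:tables")         # broad grant allows anything
assert authorize(SCOPED_GRANT, "db:read:tickets")       # scoped grant allows its function
assert not authorize(SCOPED_GRANT, "db:write:tickets")  # and nothing more
```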
Add to this the emergence of “Shadow Agents.” Much like shadow IT, agents can be created informally by business or technical teams to automate certain tasks. They often rely on quickly generated technical identities that are never centrally registered.
Over time, these identities become hard to trace. Some persist even after the project that justified them has ended, while others retain unjustifiably high privileges. In environments where AI agents interact with multiple systems (databases, SaaS, or internal tools), these ghost identities constitute a significant risk vector.
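Detecting shadow agents can start with something as simple as comparing the identities observed in access logs against the official registry. Both inputs below are hypothetical placeholders for what would come from gateway logs and an identity inventory.

```python
# A sketch of shadow-agent detection: any identity seen in traffic but absent
# from the registry deserves investigation. Inputs are illustrative.
registered = {"svc-etl-prod", "agent-support-triage"}
observed_in_logs = {"svc-etl-prod", "agent-support-triage", "key-quick-test-7"}

shadow = observed_in_logs - registered
for ident in sorted(shadow):
    print(f"unregistered identity active in the IT system: {ident}")
```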

Mapping and Governing AI Agents

To effectively secure an automated IT system, it is essential to make AI agent identities visible and controllable. This means inventorying all agents present, whether officially deployed by IT or born of ungoverned local initiatives. Each technical identity (API key, service account, OAuth token, or machine-to-machine identity) should be recorded and associated with a human owner to ensure traceability and accountability.
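A registry entry might look like the following sketch. The field names are assumptions for illustration; the mandatory human owner is the essential point.

```python
# A sketch of an agent-identity registry entry. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    identity_id: str         # API key ID, service account, OAuth client ID...
    kind: str                # "api_key" | "service_account" | "oauth_token" | "m2m"
    owner: str               # a named human, for traceability and accountability
    purpose: str             # why this identity exists
    scopes: tuple[str, ...]  # what it may access

record = AgentRecord(
    identity_id="agent-support-triage",
    kind="service_account",
    owner="alice@example.com",
    purpose="Triage inbound support tickets",
    scopes=("tickets:read", "kb:read"),
)
```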
Beyond their mere presence, it is crucial to control the resources these agents can reach. Centralized management of tools, applications, APIs, and databases makes it possible to apply the least-privilege principle to every interaction, monitor access continuously, and detect abnormal activity immediately. Technical identities must be swiftly revocable or updatable to limit the risks arising from abuse or malfunction.
Finally, the governance of agents should cover the entire lifecycle: creation, privilege assignment, activity tracking, secure credential renewal, and removal when no longer needed. Treating AI agents as fully fledged identities ensures complete visibility, systematic control, and coherent integration into the overall security strategy.
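Expressed as code, lifecycle control comes down to explicit states and a built-in expiry, so that no identity can silently outlive its project. The states and durations below are illustrative assumptions.

```python
# A sketch of lifecycle control for a technical identity. States and the
# 90-day default lifetime are assumptions for illustration.
from datetime import datetime, timedelta, timezone
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"  # e.g. abnormal behavior detected
    REVOKED = "revoked"      # project ended or credential compromised

class ManagedIdentity:
    def __init__(self, identity_id: str, ttl_days: int = 90):
        self.identity_id = identity_id
        self.state = State.ACTIVE
        self.expires_at = datetime.now(timezone.utc) + timedelta(days=ttl_days)

    def is_usable(self) -> bool:
        # An expired identity is treated as revoked, never silently renewed.
        return self.state is State.ACTIVE and datetime.now(timezone.utc) < self.expires_at

    def revoke(self) -> None:
        self.state = State.REVOKED
```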

Towards a Truly Extended Zero Trust

The Zero Trust principle (Never Trust, Always Verify) must also apply to automated entities. This implies several evolutions in security strategy: strongly authenticating machine-to-machine identities, applying the principle of least privilege to AI agents, monitoring their behavior to detect abnormal use, and integrating these identities into segmentation and access-control policies. As AI architectures grow more complex, controlling these identities becomes a foundational element of the security posture.
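Behavioral monitoring for agents can start simply: baseline an identity's normal activity and flag deviations. The event format and thresholds in this sketch are assumptions for illustration, not a product feature.

```python
# A sketch of behavioral verification: compare current activity against a
# simple baseline and flag deviations. Thresholds and actions are illustrative.
from collections import Counter

baseline = Counter({"kb:read": 120, "tickets:read": 80})  # typical hourly calls
current = Counter({"kb:read": 115, "tickets:read": 75, "db:export": 40})

def anomalies(baseline: Counter, current: Counter, factor: float = 3.0):
    for action, count in current.items():
        expected = baseline.get(action, 0)
        if expected == 0:
            yield f"action never seen before: {action} ({count} calls)"
        elif count > factor * expected:
            yield f"spike on {action}: {count} calls vs baseline {expected}"

for alert in anomalies(baseline, current):
    print(alert)  # here: flags the unprecedented "db:export" activity
```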
Artificial intelligence no longer merely analyzes or recommends: it acts directly within information systems. In an increasingly automated IT environment, trust must no longer be verified only for users, but for every agent capable of acting on their behalf.
*Aziz S. Mohammed is Senior Manager, Solutions Engineering at Okta