Expert Opinion: AI Agents and the Autonomous Workforce

Like other technological advances before it, agentic AI brings its own set of risks, which obliges security teams to work alongside tech and IT leaders to manage a new “workforce” that is as intelligent as it is autonomous, while ensuring security is embedded from deployment and across all activities.

A World of Possibilities

Agentic AI is defined by its ability to act autonomously. The chatbots used in customer service operations can be regarded—despite their flaws—as the earliest AI agents. Yet interacting with a chatbot tends to be more frustrating than useful, a reality that organizations considering deploying agentic AI should take into account. Indeed, rather than rushing to adopt a technology out of fear of missing out, there is significant benefit in planning its deployment carefully and thoughtfully.

This proliferation is inevitable, and as they continue to improve, algorithms will take on ever more processes and workflows within services that demand high efficiency, strong scalability, and data-driven decision-making. Areas such as incident response (IR), network optimization, data analytics and business intelligence (BI), software development, or supply-chain management can benefit from the analytical, organizational, and predictive capabilities of agentic AI. As this technology matures, there is little doubt its role will extend into other major domains such as medical imaging or the generation of diagnoses and personalized treatments in healthcare, or even the discovery of therapies and drugs in the research and pharmaceutical sectors, for instance.

While the transformative potential is substantial, large-scale adoption of agentic AI will not be without bumps in the road. In particular, AI agents will introduce new responsibilities for decision-makers, notably those heading technology and security teams, and will reshape a company’s digital estate, which inevitably raises new cybersecurity risks.

A Power That Carries Significant Responsibilities

Undoubtedly, CIOs, CTOs and CISOs are busy, but the broad deployment of AI agents will change their roles and add to their already substantial responsibilities. Before handing critical tasks to robots, organizations must establish a high degree of trust in their behavior and reliability. Traditionally tasked with managing IT systems and implementing new strategies and technologies, CIOs and CTOs will now be expected to deploy, monitor, and measure the reliability and effectiveness of this new artificial workforce. Similarly, security teams will no longer be charged solely with protecting human workers and traditional infrastructures, but also autonomous AI agents and the new environments in which they operate.

To achieve this, security leaders must have complete visibility at every stage of deploying their AI agents, prevent the emergence of shadow AI during the process, and get involved from the outset so that security becomes an integral part of operations. This approach involves auditing AI‑agent vendors or integrating agentic AI capabilities into their solutions, ensuring a high degree of transparency, and applying strict security standards regarding how data is accessed and used in both their operation and their training.

This transformation also entails creating environments where AI agents can operate safely and preventing any tampering with their algorithms, whether through data or memory poisoning, blocking access to data necessary to function and decide, or any other technique likely to disrupt agents’ operation and cause broader repercussions for businesses and their stakeholders.

Just as with new hires, security teams will need to define access rules for each AI agent to avoid overly broad permissions. An infected agent with excessive privileges can be exploited to access a company’s systems and move freely, disrupt other AI agents it might connect to, access sensitive data, and exfiltrate it. The parallel between AI security and human security extends to behavior monitoring. Hence, security leaders must ensure visibility into the actions and activities of AI agents, and be able to detect any behavior indicating potential compromise.

If these patterns are only at their infancy, there is no doubt that securing AI agents will be complex. In this light, organizations will need to implement strict access controls, continuous monitoring of agent behavior, robust encryption regimes for data consumed and processed, and strict input/output validation to prevent malicious attacks. They will also need to regularly conduct security audits and targeted penetration tests of AI agents and their integrations to identify and remediate vulnerabilities before they can be exploited.

Securing AI agents is no small feat. As such, embedding security from the outset of agentic AI projects will be essential. Without a solid understanding of the mission and inner workings of their AI, security teams will be unable to precisely tune these security and access parameters.

* Julien Fournier is Vice President Southern Europe, atNetskope

Related content

See all Cybersecurity articles

With CyberArk, Palo Alto Eyes a New Acquisition in the […]

By
Clément Bohic

4 min.

SIEM Elastic and Homegrown SOAR: How the University of Grenoble […]

By
Clément Bohic

AWS closes a software supply-chain vulnerability […]

By
Clément Bohic

ToolShell: the situation one week after the fixes

By
Clément Bohic

ToolShell: this SharePoint vulnerability that built up over time

By
Clément Bohic

Dawn Liphardt

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.