Expert Opinion: From the Cloud Era to Agentic AI

Every major technological upheaval follows a familiar script: the promise is seductive, adoption accelerates, competitive pressure intensifies, and security always lags behind.

That was the pattern with public cloud: a vast, loosely defined concept that meant different things to different organisations, and whose adoption brought both opportunities and concerns. Established companies often found themselves caught off guard, either exposed by nimbler rivals or surprised by shadow IT initiatives operating outside centralized governance. The result was a mix of fear, ambiguity, and a security posture that didn't quite know where to land.

Today we're seeing the same arc with artificial intelligence. But this time, the pace is even faster, the scale is larger, and the stakes are higher. AI isn't a single technology. It is evolving in successive waves, and a poor grasp of those waves is now one of the greatest risks facing enterprises.

The Three Waves of AI: Why They Matter for Security

The first AI wave centered on predictive analytics: data lakes, large-scale pattern recognition, and machine learning operating mostly in the background. For many organisations, that adoption happened quietly, without meaningful oversight from boards. From a security standpoint, these systems were essentially a data-protection challenge: ensuring that sensitive information was neither disclosed nor misused.

The second wave, generative AI, changed everything. When tools capable of producing text, code, and human-like images entered the public arena, AI quickly became a central topic of discussion. Yet this visibility came at a cost. Generative AI was lumped into a single, overly broad concept of “AI,” obscuring critical differences in risk profiles and security controls. Security teams understandably focused on what was most visible.

But it is the third wave—agentic AI—that is fundamentally altering the threat landscape.

Agentic AI: When Systems Do More Than Just Assist

Agentic AI systems don’t merely analyze or generate content—they take action. They connect directly to business systems, make decisions, and trigger workflows. Increasingly, they operate semi-autonomously, with only limited human oversight. This isn’t a distant fantasy.

Predictive AI and generative AI are, at their core, data-exchange problems. Agentic AI, by contrast, is a problem of behavioral integrity and system integrity. As soon as AI agents are authorized to interact with ERP platforms, financial systems, logistics workflows, or customer environments, the potential footprint of a compromise expands dramatically.

The parallels with early internet evolution are striking. Static websites gave way to dynamic, database-driven apps. SQL injections suddenly became a dominant threat. Automation opened fresh attack vectors. Every architectural shift introduced risks that security teams were not yet equipped to manage. Agentic AI marks a similar inflection point.

The Blind Spot: Internal Control vs. External Reality

What emerges isn't a lack of investment but misplaced confidence.

In effect, organisations feel secure because they control what happens within their own infrastructure, while neglecting the expanding ecosystem shaped by AI-driven partners, platforms, and supply chains beyond their borders.

The blind spot becomes especially dangerous when agentic AI begins operating beyond the organisational perimeter. Today's internal AI rapidly becomes tomorrow's interconnected supply-chain automation. Retail, logistics, and manufacturing are likely to spearhead this transformation, as firms pursue sustainability, just-in-time production, and operational optimization through AI.

As agentic systems start moving work from one organisation to another, the attack surface multiplies. Security failures will no longer be isolated incidents; they will cascade across systems.

Defending Against AI-Driven Threats: A Mindset Shift

Defending against AI-driven threats doesn’t require discarding existing security principles; it requires evolving them. Many of the guardrails needed to secure agentic AI are derived from controls that have proven effective for human users. The difference lies in speed, scale, and the continuous nature of operations.

In practice, AI agents must be treated as bona fide users from a security perspective, governed by Zero Trust controls. This means assigning each agent its own identity, granting access on the principle of least privilege, enforcing behavioral baselines, and continuously monitoring for anomalies. If an agent begins interacting with systems beyond its defined perimeter, that deviation should be as visible and actionable as a suspicious human action.
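To make this concrete, here is a minimal sketch in Python of what such controls could look like. It assumes a hypothetical in-house policy layer; the names AgentIdentity, allowed_scopes, and authorize are illustrative, not any vendor's API.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AgentIdentity:
        # Each agent receives its own identity, never a shared service account.
        agent_id: str
        # Least privilege: an explicit allow-list of (system, action) pairs.
        allowed_scopes: set = field(default_factory=set)

    def authorize(agent: AgentIdentity, system: str, action: str,
                  audit_log: list) -> bool:
        # Zero Trust: every call is evaluated; nothing is implicitly trusted.
        allowed = (system, action) in agent.allowed_scopes
        # Continuous monitoring: every decision is logged, and a denial is
        # surfaced exactly as a suspicious human action would be.
        audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent.agent_id,
            "system": system,
            "action": action,
            "allowed": allowed,
        })
        return allowed

    # Usage: a hypothetical invoicing agent may read the ERP but not trigger payments.
    log = []
    agent = AgentIdentity("invoice-agent-01",
                          allowed_scopes={("erp", "read_invoice")})
    print(authorize(agent, "erp", "read_invoice", log))   # True: in scope
    print(authorize(agent, "payments", "transfer", log))  # False: outside the perimeter

The point is not the code itself but the posture: the agent's perimeter is declared up front, and every deviation leaves a trace.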

Segmentation becomes essential not as a grand architectural ideal but as a practical means to limit the impact of a breach. Without it, a compromised agent can move laterally at machine speed.
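As a sketch of what that containment might look like, again with hypothetical names, segmentation can be reduced to a deny-by-default rule between zones:

    # Hypothetical segment map: each agent lives in exactly one segment.
    SEGMENTS = {
        "invoice-agent-01": "finance",
        "warehouse-agent-02": "logistics",
    }
    # Cross-segment flows are denied unless explicitly allowed.
    ALLOWED_FLOWS = {("finance", "finance"), ("logistics", "logistics")}

    def may_reach(agent_id: str, target_segment: str) -> bool:
        # Deny by default: a compromised agent cannot roam beyond its segment.
        source = SEGMENTS.get(agent_id)
        return source is not None and (source, target_segment) in ALLOWED_FLOWS

    print(may_reach("invoice-agent-01", "finance"))    # True
    print(may_reach("invoice-agent-01", "logistics"))  # False: the breach is contained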

And perhaps most importantly, organisations must stop treating AI security as a mere add-on. If they are already struggling with current threats, how will they contend with emerging risks such as agentic AI and quantum computing?

From Reactive Cybersecurity to Resilience by Design

The principal lesson from both cloud adoption and the evolution of AI is this: reactive security does not scale.

The pace of innovation now routinely outstrips governance, regulation, and procurement cycles. Waiting for regulatory maturity or for incidents to force action is no longer viable. Resilience must be designed from the outset, not tacked on after a disruption occurs.

This implies shifting focus away from point solutions toward architectural agility. Organisations must build security models that can adapt as AI capabilities evolve, rather than buckle under every new development.

AI won't slow down. Agentic systems will only become more capable, interconnected, and autonomous. Organisations that continue to treat AI security as a marginal or future concern will repeat the mistakes of the cloud era.

This time, however, the consequences will spread faster and farther.

The question isn’t whether AI will reshape the threat landscape. It already has. The real question is whether enterprises are ready to defend themselves before cascading effects reach them.

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.