Gartner forecasts a market that could grow from $5.1 billion in 2024 to $47.1 billion by 2030… Already, Visa and Mastercard offer agentic commerce solutions designed to let agents complete online purchases…
This new approach raises obvious cybersecurity questions. If an attacker were to gain control of an autonomous agent, they could destabilize a company’s internal operations and, in the case of a B2C-oriented agent, access customers’ personal data or drain their bank accounts.
At In Cyber 2025, Proofpoint demonstrated a secured interaction between two agents using its solution. Xavier Daspre, France Technical Director at Proofpoint, explains the approach: “Both agents are treated as workstations that connect via email exchange or, above all, via API to public cloud services. For us, the approach remains the same. For now, agents’ behavior is more structured and far easier to discern, but that will evolve. In the current use cases, our solutions are already equipped to protect this somewhat unusual scenario.”
The dark side of agents
Anti-DDoS service providers have been grappling with bots for years. They develop algorithms and train machine-learning models to distinguish human-generated traffic from bot traffic, and legitimate bots from malicious ones.
For Sébastien Talha, Regional Sales Director at Human Security, agents are already heavily exploited by attackers: “80% of attacks today rely on bots, because attackers need to operate at scale,” the executive explains. “Human intervention only occurs at the end of an attack, when the attacker must perform complex operations. One can imagine that with agentic AI, this will disappear.”
In the face of AI-powered bots, defenses that measure typing speed, mouse movements, or navigation patterns to determine whether a user is human will no longer suffice. “The attacker can simulate typing speed, record mouse movements, and replay them automatically,” Talha warns.
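To see why such heuristics fail, consider a toy humanness check based on inter-keystroke timing; the thresholds are invented for illustration and reflect no vendor's product. A crude script typing at perfectly regular intervals is caught, but the same script replaying intervals recorded from a real typist sails through.

```python
# Toy illustration of the kind of timing heuristic that no longer suffices:
# it flags a session as a bot when keystroke intervals are too uniform, but
# replaying intervals captured from a real human passes the check unchanged.
import statistics

def looks_human(intervals_ms: list[float]) -> bool:
    """Naive check: humans type at moderate speed with natural jitter.
    The cutoffs below are hypothetical, chosen only for this example."""
    mean = statistics.mean(intervals_ms)
    jitter = statistics.stdev(intervals_ms)
    return 60 < mean < 600 and jitter > 15

# A crude scripted bot: perfectly regular 50 ms keystrokes -> flagged.
print(looks_human([50.0] * 20))           # False

# The same bot replaying intervals recorded from a real typist -> passes.
recorded_human = [112, 95, 180, 88, 240, 130, 77, 310, 140, 98]
print(looks_human(recorded_human))        # True: replay defeats the heuristic
```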
Human Security has built more than 350 machine-learning models to thwart bot-based attacks, and its sensor collects over 2,500 technical parameters about the user, covering their network, their device, and their behavior. The company will now need to adapt this approach to accommodate the arrival of legitimate agentic AIs.
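As a rough illustration of that multi-signal approach, the sketch below blends network, device, and behavioral signals into a single risk score. The feature names, weights, and threshold are assumptions made for the example; Human Security's actual models and 2,500 parameters are not public.

```python
# Sketch of multi-signal bot scoring: signals from the network, the device,
# and behavior are combined into one risk score. Everything here is invented
# for illustration, not taken from any vendor's implementation.
from dataclasses import dataclass

@dataclass
class Signals:
    datacenter_ip: bool      # network: request originates from a hosting ASN
    headless_browser: bool   # device: automation-framework fingerprint found
    behavior_anomaly: float  # behavior: 0.0 (typical) .. 1.0 (never seen)

def risk_score(s: Signals) -> float:
    """Weighted blend standing in for an ensemble of ML models."""
    return (0.4 * s.datacenter_ip
            + 0.3 * s.headless_browser
            + 0.3 * s.behavior_anomaly)

session = Signals(datacenter_ip=True, headless_browser=False, behavior_anomaly=0.8)
print("block" if risk_score(session) > 0.5 else "allow")  # "block" at 0.64
```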
MCP, a pillar of security
French rival DataDome emphasizes behavioral analysis as a key tool for detecting fraud during a session, supplementing technical parameters such as IP address, geolocation, and device type. “On the behavioral side, we analyze mouse movements and whether the behavior, requests, and navigation path in the session align with the user’s usual behavior on the site or app,” explains Benjamin Barrier, Chief Strategy Officer and cofounder of DataDome.
“Behavioral analysis will let us detect both illegitimate AIs and legitimate agentic AIs with a mainstream footprint. Meanwhile, operators like OpenAI are deploying protocols such as MCP to enable robust authentication of agents. It’s the combination of these two approaches that will deliver effective protection for these agentic AIs.”
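The sketch below illustrates that two-pronged logic under stated assumptions: a hypothetical registry of known agent operators and a stand-in signature check admit declared agents, while anonymous traffic falls back to a behavioral anomaly gate. None of this reflects DataDome's actual implementation.

```python
# Minimal sketch of combining agent authentication with behavioral analysis.
# The operator registry, signature check, and threshold are all hypothetical.
KNOWN_OPERATORS = {"openai", "anthropic"}  # illustrative catalog of agent operators

def verify_agent_identity(operator: str, signature_valid: bool) -> bool:
    """Stand-in for cryptographic verification of a declared agent identity."""
    return operator in KNOWN_OPERATORS and signature_valid

def admit(operator: str | None, signature_valid: bool, behavior_anomaly: float) -> bool:
    if operator is not None and verify_agent_identity(operator, signature_valid):
        return True                    # authenticated agent from a known operator
    return behavior_anomaly < 0.5      # unknown traffic: behavioral gate only

print(admit("openai", True, 0.9))   # True: verified agent, anomaly tolerated
print(admit(None, False, 0.9))      # False: anonymous traffic behaving oddly
```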
The vendor has already begun cataloging the operators of agentic AIs with a visible market presence, and is working with MCP (Model Context Protocol) to secure exchanges. The protocol is set to become increasingly important in securing agentic AIs because it standardizes interaction with an agent, including the passing of parameters, whether from an application to an LLM or from agent to agent.
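For a concrete taste of the protocol, here is a minimal MCP server sketch built with the official Python SDK (package `mcp`, from https://github.com/modelcontextprotocol/python-sdk): it exposes one typed tool whose parameters an LLM or another agent can pass when calling it. The `check_price` tool and its business logic are invented for the example.

```python
# Minimal MCP server using the official Python SDK.
# It exposes one typed tool; the parameters of the annotated function
# become the tool's input schema, callable by an LLM or another agent.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-commerce")  # server name shown to connecting clients

@mcp.tool()
def check_price(sku: str, currency: str = "EUR") -> str:
    """Return a price quote for a product SKU (hypothetical business logic)."""
    return f"SKU {sku}: 19.99 {currency}"

if __name__ == "__main__":
    mcp.run(transport="stdio")  # local transport; remote servers typically use HTTP
```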
Best practices for MCP recommend using TLS for remote connections, validating all incoming messages, and protecting resources, notably through access control and strict error handling.
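Applied inside a tool handler, those practices might look like the following sketch: the caller's credential is checked before any work is done, parameters are strictly validated, and failures raise clean errors rather than leaking internals. The token scheme and SKU format are illustrative assumptions, not part of the MCP specification.

```python
# Sketch of the hardening practices above applied inside a tool handler:
# access control first, strict input validation second, terse errors last.
# The token set and SKU pattern are invented for this example.
import re

ALLOWED_TOKENS = {"agent-token-123"}  # hypothetical per-agent credentials

def check_price_secure(sku: str, auth_token: str) -> str:
    if auth_token not in ALLOWED_TOKENS:
        raise PermissionError("caller is not authorized for this tool")
    if not re.fullmatch(r"[A-Z0-9-]{4,32}", sku):   # validate before use
        raise ValueError("malformed SKU")           # no internal details leaked
    return f"SKU {sku}: 19.99 EUR"
```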