Custom Security for AI Infrastructure

The report titled “The State of Generative AI 2025,” edited by Palo Alto Networks, demonstrates a dramatic surge in generative AI use cases in 2024. Traffic to these services jumped by 890% in 2024, and large organizations typically operate an average of 66 generative AI applications, with about 10% of them deemed high risk.

Whether it’s translation services, document summarization, or also chatbots, search engines, and developer-focused tools, these AIs are now being adopted across all industry sectors.

Pierre Jacob, Director General of Magellan Sécurité.

Securing these infrastructures comes with some particularities. AI systems remain standard IT workloads with software containers that require protection, but LLMs carry vulnerabilities intrinsic to machine learning. “For companies that want to train their models, it is extraordinarily important to secure the model supply chain,” explains Pierre Jacob, Director General of Magellan Sécurité.

Read also: How Shadow AI Is Driving Data Leakage Risks

The consultant notes that it is relatively easy to contaminate a model and introduce significant biases in its behavior: “Only a fairly small percentage of data can derail a model. It is therefore crucial to meticulously secure training infrastructures.”

Cybersecurity Enters NVIDIA Infrastructures

NVIDIA has fully acknowledged the risks facing AI trained and run on its infrastructures. The California-based company has embedded confidential computing features into its Hopper and Blackwell architectures, enabling end-to-end encrypted data training for AI. Likewise, security solution providers are urged to deploy their security building blocks on AI infrastructures.

Earlier in the summer, CrowdStrike announced the integration of its Falcon Cloud Security platform with Nvidia’s NIM LLM microservices, as well as with NeMo, its AI development platform. We see the same intent to align with NVIDIA at Check Point.

Adrien Merveille, CTO France of Check Point Software

“We have signed a partnership with Nvidia to come directly onto the GPUs that will power the training of AI engines,” declares Adrien Merveille, CTO France of Check Point Software. “This will enable security rules to be applied to segment training data, control administrator access, and manage memory handling to prevent attacks like Prompt Injection.”

Similarly, the vendor has integrated protection from the Top 10 WASP LLM into its Web Application Firewall to shield AI systems against known attack vectors. This ranking lists the ten most common attack types on LLMs, from Prompt Injection and Data Poisoning to model theft and vulnerabilities in the data supply chain during training or production.

Eric Vedel, Cisco’s CISO, reminds us that even LLMs downloaded from Hugging Face may have been tampered with and must be vetted before deployment. Cisco is advancing its Cisco AI Defence to detect vulnerabilities in models. Officially launched on January 15, 2025, it stems from the acquisition of Robust Intelligence a few months earlier.

Éric Vedel, Cisco’s CISO

“This vendor had already deployed its protections with very large clients to combat Shadow AI by increasing visibility into internal AI usage, detecting vulnerabilities in the models in use, and implementing safeguards and countermeasures against these AI-related risks. A unique move in the market, we have incorporated this offering into our SSE (Secure Access Service Edge) portfolio.”
This solution sits within the wave of AI SPM (Security and Privacy Management) offerings appearing to secure models and data.

Read also: Palo Alto Networks Acquires Chronosphere for $3.35 billion

Palo Alto Networks has recently placed itself on this market with a comprehensive platform dedicated entirely to AI, addressing all risks catalogued by OWASP: “To cover all of these risks, we chose to create a new platform, Prisma AIRS,” explains Eric Antibi, Chief Technology Officer at Palo Alto Networks. “This brings a full set of solutions designed for the security of these complex architectures and the specific risks facing GenAI.”

Eric Antibi, Chief Technology Officer of Palo Alto Networks.

The suite includes a Model Scanning module to identify vulnerabilities in models, a Posture Management module to spot configuration issues in the architecture, and a Red Teaming module that continuously tests models to ensure new vulnerabilities do not appear after updates, for instance.

Finally, modules ensure runtime security for AI as well as for intelligent agents. “Prisma AIRS is a standalone platform, nevertheless the network component plays a crucial role in the security of these infrastructures, particularly for monitoring exchanges between datasets and LLM engines. Consequently, the administration console of our Network Security platform is used, but these remain separate modules.”

While AI SPM solutions are still relatively new and not yet widely adopted, security and AI teams must adopt them, mature, and advance their security policies for AI to a higher level.


Pierre Jacob, Director General of Magellan Sécurité: “Don’t cling to one single stance.”

“You need to tailor your LLM choices to the uses and the risks. It is possible to deploy an LLM or a Small Language Model on a workstation if the use case requires offline operation. Apple devices are quite well suited for local SLM deployments, for example. Similarly, you should not reject an LLM just because it lives in the public cloud. You need to maintain an architectural vision, design security by design, and be able to switch between models, setting up microservice-based architectures that can consume your models without becoming dependent on them.”

Dawn Liphardt

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.