It’s confirmed: cybersecurity has become a central arena of competition among the AI giants.
A week after the launch of Claude Mythos, OpenAI responds with GPT-5.4-Cyber, a refined variant of GPT-5.4.
Both products share the same stated objective, bolstering security teams against growing threats, but they diverge radically on how to achieve it.
Two models, two approaches
Anthropic opts for control. Claude Mythos is built on a new frontier architecture, remains non-public, and its deployment is deliberately limited to a tight circle of strategic partners. The model is said to have already identified thousands of major vulnerabilities in operating systems, web browsers, and other critical software.
OpenAI, by contrast, argues for democratization. GPT-5.4-Cyber is a fine-tuned version of GPT-5.4, trained on specialized corpora including public threat intelligence, anonymized incident logs, and benchmarks built on the MITRE ATT&CK framework.
Access is conditioned on identity verification via the Trusted Access for Cyber (TAC) program, launched in February, which the company now extends to thousands of certified individual defenders and hundreds of teams. Progressive verification levels unlock increasing capabilities, with users approved at the highest level gaining access to GPT-5.4-Cyber and its most sensitive features.
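In practice, this kind of progressive unlocking amounts to mapping verification tiers to capability sets. The sketch below illustrates the idea; the level names and capabilities are hypothetical, since OpenAI has not published the actual TAC tiers.

```python
# Illustrative sketch of tiered capability gating as a program like TAC
# might implement it. Level names and capability sets are assumptions,
# not OpenAI's published tiers.
TAC_LEVELS = {
    "unverified": set(),
    "verified_individual": {"alert_triage", "report_generation"},
    "certified_team": {"alert_triage", "report_generation", "binary_analysis"},
    "highest": {"alert_triage", "report_generation", "binary_analysis",
                "response_scripting", "gpt-5.4-cyber"},
}

def allowed(level: str, capability: str) -> bool:
    """Return True if a user verified at `level` may use `capability`."""
    return capability in TAC_LEVELS.get(level, set())
```

The point of such a scheme is that sensitive features (here, model access itself) sit only behind the highest verification level, while basic defensive tooling stays broadly available.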
“It’s a team sport. We must ensure that every team is capable of securing its systems,” says Fouad Matin, a security researcher at OpenAI. The maker of ChatGPT goes further, suggesting that it is “neither practical nor appropriate to decide centrally who has the right to defend.” A pointed jab at its rival’s selective approach.
Capabilities designed for SOC teams and CISOs
Functionally, GPT-5.4-Cyber stands apart from general-purpose conversational models by its native integration into security workflows. It is designed to operate alongside SIEM tools and EDR solutions, and can synthesize complex alerts, generate investigative reports, and recommend responses aligned with an organization’s internal policies.
Its expanded capabilities include evaluating compiled software to identify malware and weaknesses, detecting patterns of malicious behavior, analyzing simulated intrusion attempts, and drafting automated response scripts.
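The triage portion of that workflow, collapsing raw SIEM/EDR alerts into a single policy-aware request, can be sketched as follows. The alert schema and prompt wording are assumptions for illustration; no real OpenAI API is called here.

```python
# Illustrative sketch of how a SOC pipeline might hand SIEM alerts to a
# model like GPT-5.4-Cyber for synthesis. The Alert schema and the prompt
# format are hypothetical, not a documented interface.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g. the SIEM or EDR tool that raised it
    severity: str     # "low" | "medium" | "high" | "critical"
    description: str

def build_triage_prompt(alerts: list[Alert]) -> str:
    """Rank alerts by severity and fold them into one request asking the
    model to synthesize them and recommend policy-aligned responses."""
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    ranked = sorted(alerts, key=lambda a: order.get(a.severity, 4))
    lines = [f"[{a.severity.upper()}] {a.source}: {a.description}"
             for a in ranked]
    return ("Synthesize the alerts below, then recommend responses "
            "consistent with our internal security policy:\n"
            + "\n".join(lines))
```

Pre-ranking by severity before the model sees the batch is a design choice: it keeps the most urgent signals at the top of the context even if the alert feed arrives unordered.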
The deliberately “cyber-permissive” design of the model is precisely what justifies the TAC verification framework.
OpenAI also announces native connectors with Azure Security Center, AWS GuardDuty and Google Security Operations (formerly Chronicle), as well as a partnership with Microsoft Security Copilot.
GPT-5.4-Cyber will be available from May 2026 via OpenAI Enterprise and as a secure API intended for managed security solution providers. Early pilots are planned with players in the financial sector, healthcare operators, and European ministries.
Google DeepMind lies in wait, traditional vendors under pressure
This OpenAI-Anthropic bifurcation unfolds under the watchful eye of Google DeepMind, which in 2025 introduced a framework for evaluating the cyber capabilities of advanced AI models, focused on evasion, obfuscation, and proactive mitigations, but which has not yet released a dedicated model comparable to these offerings.
For traditional vendors like CrowdStrike or Palo Alto Networks, the situation is ambivalent: partners of Anthropic via Project Glasswing, they are simultaneously exposed to growing competition as these models absorb functions previously reserved for SOC platforms.
The speed at which this race is accelerating also raises governance questions that the industry is only beginning to articulate: how do you regulate tools whose dual offensive/defensive nature remains, by design, difficult to contain?