The European Commission has taken a step forward in its push to oversee the most powerful artificial intelligence models.
OpenAI will grant the European Union access to GPT-5.5-Cyber, a specialized variant of its latest model designed to address cybersecurity challenges. In contrast, Anthropic has been slow to open the same doors to Brussels.
The GPT-5.5-Cyber model was rolled out last week in a limited early-access phase reserved for pre-screened cybersecurity teams. That circle is now being widened to European partners: businesses, governments, cybersecurity authorities and EU institutions, including the European AI Office.
OpenAI Embraces Transparency
The Commission’s spokesperson, Thomas Regnier, praised the move at a briefing. “We welcome OpenAI’s transparency and its intention to grant the Commission access to the new model,” he said, confirming that an exchange had already taken place and that further discussions were planned within the week. “This will allow us to monitor the model’s deployment very closely and to address security concerns,” he added.
On the OpenAI side, the stance is decidedly cooperative. George Osborne, head of the OpenAI for Countries program, emphasized the need for shared responsibility: “AI labs like ours should not be the sole arbiters of cybersecurity, because resilience depends on trusted partners working together.” He also announced an “EU Cyber Action Plan” aimed at collaborating with European policymakers, institutions and businesses.
Anthropic Falls Behind
The contrast with Anthropic is striking. A month after the debut of Claude Mythos, the Commission still has not obtained preliminary access to review it.
Regnier acknowledged that discussions were ongoing with Anthropic, but stressed the difference in the maturity of the negotiations. “With one (OpenAI), you have a company proactively offering to grant access. With the other (Anthropic), we have good discussions, but we’re not at a stage where we can speculate on potential access,” he summarized.
The Commission is said to have held four to five meetings with Anthropic, though the question of access to its models has reportedly never truly been put on the table.