OpenAI Accuses DeepSeek of Copying Its LLMs

OpenAI is sounding the alarm to Congress. In a memorandum submitted this Thursday to the U.S. House of Representatives' Special Committee on China, the company accuses its Chinese rival DeepSeek of unfairly exploiting American AI models to train its own technology.

According to the document, seen by Bloomberg and Reuters, DeepSeek relies on "distillation," a technique that uses the outputs of an established AI model to train a new, competing model. OpenAI says it has detected "new masked methods" designed to bypass its safeguards against misuse of its systems.
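In its simplest form, distillation means training a fresh "student" model to reproduce a "teacher" model's outputs, without access to the teacher's parameters or original training data. The toy sketch below illustrates the idea with a one-parameter logistic model; all names and numbers are illustrative, not anything from DeepSeek's or OpenAI's actual systems:

```python
import math
import random

def teacher(x):
    # Hidden model we can only query: a logistic with secret w=2.0, b=-1.0.
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.0)))

random.seed(0)
inputs = [random.uniform(-3.0, 3.0) for _ in range(200)]

# Student: a fresh logistic model fitted to the teacher's soft outputs
# using plain stochastic gradient descent on cross-entropy loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):                      # epochs
    for x in inputs:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        t = teacher(x)                    # query the teacher's output
        grad = p - t                      # d(cross-entropy)/d(logit)
        w -= lr * grad * x
        b -= lr * grad

print(round(w, 2), round(b, 2))           # student approaches ~2.0, ~-1.0
```

The student never sees the teacher's weights, only its answers, which is why API access alone is enough to make distillation possible, and why providers try to police it through terms of use.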

These practices, largely tied to China and occasionally to Russia according to OpenAI, persist and are growing more sophisticated despite efforts to clamp down on users who violate its terms of use.

DeepSeek Allegedly Uses Distillation

The maker of ChatGPT says that accounts linked to DeepSeek employees developed ways to bypass access restrictions by routing traffic through third-party relays that disguise its origin. Code was also written to access the U.S. models and extract their outputs "programmatically."
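Generically, "programmatic" extraction means scripting queries against a model's API and storing the prompt/response pairs as training data. A minimal, purely illustrative sketch, in which `query_model` is a hypothetical stand-in for a real remote API call:

```python
import json

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a remote model API.
    return f"answer to: {prompt}"

prompts = [
    "What is model distillation?",
    "Summarize export controls on AI chips.",
]

# Each harvested pair becomes one supervised training example for a new model.
dataset = [{"prompt": p, "completion": query_model(p)} for p in prompts]

print(json.dumps(dataset, indent=2))
```

At scale, loops like this can harvest millions of examples, which is why providers monitor for high-volume automated querying.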


In late September 2025, in a Nature article, a group of authors billed as the "DeepSeek-AI Team" revealed that they had spent $294,000 training their R1 model, a figure well below those reported by its American rivals. They noted that this R1-specific training ran for a total of 80 hours on a cluster of 512 H800 chips, after a preparatory phase using A100 chips for experiments on a smaller model.
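The reported figures can be sanity-checked: 512 chips running for 80 hours comes to about 41,000 GPU-hours, so the $294,000 figure implies an effective rate of roughly $7 per GPU-hour. The per-hour rate is our inference, not a number from the paper:

```python
# Back-of-envelope check on the figures reported in the Nature article.
chips = 512                  # H800 GPUs in the cluster
hours = 80                   # reported wall-clock training time
cost_usd = 294_000           # reported training cost

gpu_hours = chips * hours    # total compute consumed
rate = cost_usd / gpu_hours  # implied cost per GPU-hour (our inference)

print(gpu_hours, round(rate, 2))  # 40960 GPU-hours at about $7.18/hour
```

That implied rate is in the range of commercial GPU rental pricing, which is one reason the number was taken seriously despite being far below U.S. labs' reported spending.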

By comparison, Sam Altman, CEO of OpenAI, said in 2023 that training its foundation models had cost "well over" $100 million, without providing detailed figures for any individual release.

A Commercial and Security Threat

This situation presents a double threat. Economic, first: with DeepSeek and many other Chinese models offered for free, distillation poses a major commercial risk to companies like OpenAI and Anthropic, which have invested billions in their infrastructure and charge for their premium services.

On the security front: OpenAI notes that DeepSeek’s chatbot censors results on topics sensitive to Beijing, such as Taiwan or the Tiananmen Square events. When capabilities are copied via distillation, the guardrails often disappear, enabling potentially dangerous AI use in high-risk areas such as biology or chemistry.

David Sacks, the White House AI adviser, had already warned about these tactics last year, saying that DeepSeek "extracts more juice" from older chips while distilling knowledge from OpenAI's models.

The Semiconductor Question

Washington's concerns also center on access to advanced AI chips. At the end of 2025, President Trump relaxed the restrictions, allowing Nvidia to sell its H200 processors to China, chips roughly 18 months behind the latest Blackwell versions.

Documents obtained by the China committee reveal that Nvidia provided technical support to help DeepSeek improve and co-design its R1 model. The base DeepSeek-V3 model reportedly required only 2.8 million GPU-hours on H800 chips for full training, processors that could legally be sold to China for only a few months in 2023.

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.