In Europe, the long-standing GPU shortage is set to become a thing of the past. During the inaugural GTC Paris event, held alongside the 9th edition of VivaTech, NVIDIA made a bold promise to address the issue.
A Vision for a Robust AI Hardware Ecosystem
Behind this initiative lies an ambitious plan involving around twenty projects dedicated to developing AI “factories.” These facilities are primarily situated near key public computing centers—such as Greece’s GRNET, Italy’s CINECA, and Luxembourg’s LuxProvide, among others. Jensen Huang, NVIDIA’s CEO, confidently stated: “Within two years [2024-2026], we will have multiplied Europe’s AI computing capacity tenfold.”
These AI factories are expected to be complemented by “technology centers” in seven European countries—namely France, Germany, Spain, Finland, Italy, the UK, and Sweden. Their primary purpose will be to foster collaboration within national ecosystems. NVIDIA’s strategic map for France, for example, includes a broad network of developers, researchers, startups, large corporations, upskilling training providers, and GPU resource providers. Additionally, a more cross-cutting layer will incorporate cloud service providers (CSPs), independent software vendors (ISVs), system integrators, AI operations players, and compute, storage, and networking providers.
From Hopper to Grace Blackwell: The Evolution of NVIDIA’s Flagship Systems
At the heart of this infrastructure stand the new Grace Blackwell NVL72 systems. NVIDIA is currently delivering units equipped with its latest GB200 chips, with GB300 models in development. The company produces about a thousand units per week. Each system weighs nearly two tons, draws around 120 kilowatts, and costs approximately three million dollars.
These supercomputers continue the lineage of the Hopper systems. They integrate CPU and GPU nodes through the Grace Blackwell chips and use dedicated NVLink connectivity routed via a specialized rack. The entire architecture is interconnected by a “backbone” of 5,000 copper cables offering a staggering 130 terabytes per second of bandwidth, “a speed exceeding peak internet traffic,” according to Huang.
NVIDIA describes the Grace Blackwell system as either a “giant virtual GPU” or a “thinking machine.” Both labels hint at the rise of reasoning models that require significant computing power, especially in inference tasks.
Compute nodes (left) and NVLink connectivity (right). In the background, the “backbone”.
Nemotron: Paving New Pathways in Robotics and Agent Technology
NVIDIA also emphasizes its involvement in the development of digital twins, with the Omniverse platform serving as its flagship. Major companies—including BMW, Bouygues, Mercedes-Benz, Siemens, and SNCF—are actively utilizing Omniverse within Europe.
These digital twins are made photorealistic and physically accurate primarily so that robots can learn and evolve within them, Huang explains. To that end, NVIDIA offers a dedicated AI development stack comprising the Isaac platform, the Jetson Thor computing module, and models that generate movement from sensor data.
Jetson Thor development kit
Some of these models fall under the Nemotron family. Under this banner, NVIDIA refines open-source models—specializing their knowledge, extending their context handling, or strengthening their reasoning capabilities. European companies such as Bielik, DictaAI, Domyn, LightOn, NAISS, and Utter leverage Nemotron for linguistic and cultural localization. Perplexity, for example, aims to incorporate some of these models into its search engine.
Additionally, Nemotron models fuel the Enterprise AI Agent Platform, which includes a RAG framework (NeMo Retriever), a generalist agent blueprint (AI-Q), and a toolkit covering data preparation, training, fine-tuning, evaluation, and deployment.
Quantum Computing: NVIDIA Promotes the GPU as the Prime Solution
NVIDIA continues to maintain a wide array of systems, from desktop solutions like the DGX Spark and DGX Station to the Grace Blackwell NVL72 supercomputers. The catalog also features x86 servers—such as the DGX B200/B300 series equipped with Blackwell GPUs—and the RTX PRO lineup, which spans laptops, workstations, and servers fitted with Blackwell RTX 6000 cards.
The GTC Paris event marked the official announcement of CUDA-Q, NVIDIA’s quantum computing stack ported onto GB200 chips. The vision is for hybrid systems—traditional computers combined with quantum processors—where GPUs handle tasks like preprocessing, control, and error correction.
CUDA-Q is part of the broader CUDA-X suite, which bundles microservices and specialized libraries, including:
- cuOpt (combinatorial optimization)
- cuDNN and Dynamo (neural network acceleration and inference workload distribution)
- cuDF and cuML (Spark and scikit-learn acceleration)
- Earth-2 (weather and climate simulations)
- MONAI (medical imaging)
- Parabricks (genomic analysis)
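To give a sense of how cuDF fits into existing workflows: it ships a pandas accelerator mode that runs unmodified pandas code on the GPU when launched via `python -m cudf.pandas script.py` (or `%load_ext cudf.pandas` in a notebook). The sketch below is ordinary pandas code—the DataFrame contents are illustrative, not from the article—and falls back to the CPU when no GPU or cuDF installation is present.

```python
# Plain pandas code. Launched as `python -m cudf.pandas script.py`
# (with cuDF installed and a GPU available), the same script runs
# GPU-accelerated with no code changes; otherwise it runs on the CPU.
import pandas as pd

# Illustrative data, not figures from the article.
df = pd.DataFrame({"region": ["EU", "EU", "US"], "gpus": [100, 50, 200]})

# A typical groupby-aggregate, one of the operations cuDF accelerates.
totals = df.groupby("region")["gpus"].sum()
print(totals["EU"])  # 150
```

cuML takes the analogous approach for scikit-learn: its estimators mirror the scikit-learn API, so swapping the import is often the only change needed.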