Nvidia has spent the opening stretch of GTC 2026 stacking up partnership announcements across chips, cloud, robotics, telecom, autos, factory software, space, and graphics.
The event started on Monday and runs for three days.
Speaking to a packed crowd, Nvidia CEO Jensen Huang said he expects purchase orders tied to Blackwell and Vera Rubin to reach $1 trillion through 2027, which is far above the $500 billion revenue opportunity the company had talked about last year for those two chip platforms.
Huang also reiterated that Nvidia is seeing strong demand from both startups and large companies. Nvidia shares rose about 2% on Monday, though the stock was back in the red at press time on Tuesday.
The bigger story at the conference, though, has been the deal flow. By Cryptopolitan’s count, the company has already rolled out roughly 15 to 17 partnerships.
One of the more unusual announcements at GTC 2026 focused on space. Nvidia said its newest accelerated computing platforms are being used to push AI compute into orbital data centers, geospatial intelligence, and autonomous space operations.
The company said it is targeting environments where size, weight, and power are tight, but demand for serious computing power is still rising.
That is where the new NVIDIA Space-1 Vera Rubin Module comes in. Nvidia said the module is the latest part of its accelerated platform for space. It also said the Rubin GPU on the module can deliver up to 25 times more AI compute for space-based inferencing than the NVIDIA H100 GPU.
The company tied that increase to next-generation compute for orbital data centers, more advanced geospatial intelligence processing, and autonomous space operations. It also said the platform is designed to let AI applications work from ground to space and from space to space as mission needs become more complex.
Nvidia said it is working with robot brain developers, industrial robot makers, and humanoid robot companies to push production-scale physical AI.
At GTC, Jensen introduced new NVIDIA Isaac simulation frameworks and new NVIDIA Cosmos and NVIDIA Isaac GR00T open models.
The partner list on that front is long: Nvidia named ABB Robotics, AGIBOT, Agility, FANUC, Figure, Hexagon Robotics, KUKA, Skild AI, Universal Robots, World Labs, and YASKAWA as industry leaders building on its platform.
Nvidia also announced a separate industrial software push. It said it is working with Cadence, Dassault Systèmes, PTC, Siemens, and Synopsys to bring NVIDIA CUDA-X, NVIDIA Omniverse, and GPU-accelerated industrial tools to FANUC, HD Hyundai, Honda, JLR, KION, Mercedes-Benz, MediaTek, PepsiCo, Samsung, SK hynix, and TSMC.
The company said those tools will be used to speed up design, engineering, and manufacturing.
That same group of software firms is also rolling out Nvidia-powered agentic AI tools for customers preparing for the next stage of the AI market.
Nvidia said those solutions are running on its AI infrastructure across Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure, as well as through original equipment manufacturers Dell, HPE, and Supermicro for faster design and simulation work.
The auto side of the event was busy too. Nvidia said it has expanded its work with Hyundai Motor Company and Kia Corporation on next-generation autonomous driving technology built on the NVIDIA DRIVE Hyperion platform.
The company said the collaboration combines Hyundai’s software-defined vehicle capabilities, global fleet, and autonomous driving development work with Nvidia’s accelerated computing, AI infrastructure, and self-driving software.
As part of the expanded relationship, Hyundai plans to integrate Nvidia autonomous driving technology into selected vehicles, supporting Level 2 and above systems. The company said that work is meant to improve safety and add smarter driving functions across Hyundai vehicle platforms.
Nvidia also said adoption of DRIVE Hyperion is growing with global automakers BYD, Geely, Isuzu, and Nissan, along with mobility providers.
Another partnership centered on the new NVIDIA Physical AI Data Factory Blueprint, which Nvidia described as an open reference architecture that unifies and automates how training data is generated, expanded, and evaluated.
For that effort, the company said it is working with Microsoft Azure and Nebius to integrate the blueprint into cloud infrastructure and services.
Nvidia also launched NVIDIA Dynamo 1.0, open-source software for generative and agentic inference at scale. Paired with the NVIDIA Blackwell platform, the company said the software helps cloud providers, AI companies, and enterprises deliver inference with more speed, scale, and efficiency.
The company said the problem now is not just raw computing power. It is orchestration inside the data center, where requests come in at different sizes, in different formats, and in unpredictable bursts.
That is why Nvidia described Dynamo 1.0 as the distributed operating system of AI factories. The software is built to coordinate GPU and memory resources across a cluster for more complex AI workloads. In industry benchmarks, Nvidia said Dynamo improved inference performance on NVIDIA Blackwell GPUs by up to 7 times.
Telecom was another area on the list. Nvidia and T-Mobile said they are working with Nokia and a growing ecosystem of developers to deliver physical AI applications over distributed edge AI networks.
The companies said next-generation AI-RAN infrastructure can turn wireless networks into a platform for distributed high-performance edge AI computing.
They tied that to developers building vision AI agents that can understand the physical world across cities, utilities, and industrial work sites through the NVIDIA Metropolis platform.
T-Mobile said it was the first in the United States to pilot Nvidia’s AI-RAN infrastructure using Nokia’s anyRAN software. The carrier is now working with selected Nvidia physical AI partners while continuing to deliver advanced 5G connectivity.
Nvidia also launched the NVIDIA Vera CPU, which it called the first processor built for agentic AI and reinforcement learning. The company said Vera delivers twice the efficiency and runs 50% faster than traditional rack-scale CPUs.
The chip builds on the earlier NVIDIA Grace CPU and is aimed at organizations building AI factories for coding assistants, enterprise agents, consumer agents, and other large-scale AI services. Nvidia said Vera brings the highest single-thread performance and bandwidth per core.
The companies working with Nvidia to deploy Vera include hyperscalers Alibaba, CoreWeave, Meta, and Oracle Cloud Infrastructure, along with system makers Dell Technologies, HPE, Lenovo, Supermicro, and others.