
NVIDIA Unveils Vera Rubin POD 40-Rack AI Supercomputer for Agentic Workloads


Iris Coleman Mar 16, 2026 19:48

NVIDIA announces Vera Rubin POD featuring 1,152 GPUs across 40 racks, delivering 60 exaflops and 10x better inference performance per watt than Blackwell.

NVIDIA just dropped the specs on its most ambitious AI infrastructure play yet. The Vera Rubin POD packs 1,152 Rubin GPUs across 40 racks, delivering 60 exaflops of compute power and 10 petabytes per second of total scale-up bandwidth. Production units ship in the second half of 2026.

The numbers here are staggering: 1.2 quadrillion transistors, nearly 20,000 NVIDIA dies, all engineered to function as a single coherent supercomputer. NVIDIA claims 4x better training performance and 10x better inference performance per watt compared to its current Blackwell architecture—with token costs dropping to one-tenth of current levels.
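The transistor and die counts can be sanity-checked with simple division; the per-die average below is a derived illustration, not an official NVIDIA spec:

```python
# Back-of-envelope: average transistor count per die in the Vera Rubin POD.
# Both inputs come from the announcement; the per-die average is derived here.
TOTAL_TRANSISTORS = 1.2e15   # 1.2 quadrillion transistors
TOTAL_DIES = 20_000          # "nearly 20,000 NVIDIA dies"

def transistors_per_die_billions(total=TOTAL_TRANSISTORS, dies=TOTAL_DIES):
    """Average transistors per die, expressed in billions."""
    return total / dies / 1e9

print(f"~{transistors_per_die_billions():.0f} billion transistors per die on average")
```

That works out to roughly 60 billion transistors per die on average, which is plausible given that the count mixes large GPU dies with smaller CPU, DPU, and switch silicon.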

Five Purpose-Built Rack Systems

The POD combines five distinct rack-scale systems, each targeting specific bottlenecks in modern AI workloads:

Vera Rubin NVL72 serves as the core compute engine. Each rack integrates 72 Rubin GPUs and 36 Vera CPUs connected through NVLink 6, which pushes 3.6 TB/s bandwidth per GPU—more total bandwidth than the entire global internet, according to NVIDIA. The system targets all four AI scaling laws: pretraining, post-training, test-time scaling, and agentic scaling.

Groq 3 LPX racks tackle the latency problem. With 256 language processing units per rack using SRAM-only architecture, these pair with NVL72 to deliver what NVIDIA claims is 35x more tokens and 10x more revenue opportunity for trillion-parameter models versus Blackwell.

Vera CPU racks provide sandbox environments for agent testing. A single rack sustains over 22,500 concurrent reinforcement learning environments—critical for validating agentic AI outputs before deployment.

BlueField-4 STX racks introduce what NVIDIA calls "AI-native storage" through the CMX context memory platform. By offloading KV cache to dedicated high-bandwidth storage, the system claims 5x higher tokens-per-second and 5x better power efficiency than traditional approaches.

Spectrum-6 SPX networking racks tie everything together with 102.4 Tb/s switches featuring co-packaged optics.
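The per-GPU NVLink figure rolls up to a per-rack aggregate with straightforward arithmetic. How the quoted POD-wide 10 PB/s is counted (per direction versus bidirectional, compute racks only versus all fabric) is not specified, so this sketch derives only the NVL72 rack number:

```python
# Sketch: aggregate NVLink 6 scale-up bandwidth of one Vera Rubin NVL72 rack,
# using the per-GPU figure from the announcement.
GPUS_PER_RACK = 72
NVLINK_BW_TB_S = 3.6  # TB/s per GPU, as quoted

def rack_scaleup_bw_tb_s(gpus=GPUS_PER_RACK, per_gpu=NVLINK_BW_TB_S):
    """Aggregate per-rack NVLink bandwidth in TB/s."""
    return gpus * per_gpu

print(f"~{rack_scaleup_bw_tb_s():.1f} TB/s aggregate NVLink bandwidth per NVL72 rack")
```

That is roughly 259 TB/s per compute rack; the POD-wide 10 PB/s figure presumably also counts the Groq, storage, and networking racks' fabrics.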

The Token Economics Argument

NVIDIA frames this around a specific market reality: token consumption now exceeds 10 quadrillion annually, and the shift from human-AI to AI-AI interactions will accelerate that growth dramatically. Modern agentic systems generate massive reasoning token volumes while expanding KV cache requirements—exactly the bottleneck this architecture targets.
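To make the 10-quadrillion-token figure concrete, it can be converted into a steady-state rate. The annual total is from the article; uniform consumption across the year is a simplifying assumption:

```python
# Sketch: convert ~10 quadrillion tokens/year into an average per-second rate.
TOKENS_PER_YEAR = 1e16            # 10 quadrillion, as cited
SECONDS_PER_YEAR = 365 * 24 * 3600

def avg_tokens_per_second(annual=TOKENS_PER_YEAR):
    """Average global token rate, assuming uniform consumption."""
    return annual / SECONDS_PER_YEAR

print(f"~{avg_tokens_per_second() / 1e6:.0f} million tokens per second, on average")
```

That is on the order of 300 million tokens generated every second worldwide, before the AI-to-AI growth NVIDIA is projecting.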

Third-party SemiAnalysis InferenceMax benchmarks cited by NVIDIA show current Blackwell systems already deliver 50x better performance per watt and 35x lower cost per token compared to H200. Vera Rubin aims to extend that lead.

Thermal and Power Engineering

The third-generation MGX rack architecture introduces Intelligent Power Smoothing with 6x more rack-level energy storage (400 joules per GPU) than previous generations. This reduces peak current demands by up to 25% and eliminates the need for massive battery packs.
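A rough sense of what 400 joules per GPU buys can be sketched as follows. The per-GPU power draw used here is a hypothetical placeholder (Rubin's TDP is not given in the article), so treat the result as order-of-magnitude only:

```python
# Sketch: how long 400 J/GPU of rack-level energy storage could cover the
# "shaved" slice of peak power. The per-GPU draw is an ASSUMED placeholder.
ENERGY_PER_GPU_J = 400.0
ASSUMED_GPU_POWER_W = 2000.0   # hypothetical per-GPU draw, for illustration
PEAK_REDUCTION = 0.25          # "reduces peak current demands by up to 25%"

def smoothing_window_s(energy=ENERGY_PER_GPU_J,
                       power=ASSUMED_GPU_POWER_W,
                       fraction=PEAK_REDUCTION):
    """Seconds the buffer can supply the shaved fraction of peak power."""
    return energy / (power * fraction)

print(f"~{smoothing_window_s():.1f} s of transient coverage per GPU")
```

Under that assumption the buffer rides through sub-second power spikes, which is exactly the role of smoothing capacitors rather than backup batteries.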

All racks operate at 45°C warm-water inlet temperatures, enabling data centers in many climates to use ambient air cooling. NVIDIA claims this frees enough power to add 10% more racks in the same facility power budget.

Looking Ahead

Beyond the initial POD configuration, NVIDIA previewed Vera Rubin Ultra NVL576 scaling to 576 GPUs across eight racks, and the next-generation Kyber architecture targeting NVL1152 with 144 GPUs per rack. The roadmap suggests NVIDIA sees multi-rack NVLink domains as the future of AI infrastructure—not just bigger GPUs, but fundamentally different system architectures.

For enterprises planning AI infrastructure investments, the message is clear: the economics of AI compute are shifting from chip-level to facility-level optimization. Those building out data centers now face a choice between current-generation systems and waiting for Vera Rubin availability in late 2026.

Image source: Shutterstock
  • nvidia
  • ai infrastructure
  • vera rubin
  • data centers
  • enterprise ai
