Nvidia disclosed several large-scale sovereign and industry AI projects on its earnings call, including an agreement with Saudi Arabia covering 400,000 to 600,000 GPUs over three years and “AI factory and infrastructure projects amounting to an aggregate of 5 million GPUs” announced this quarter.
The company said these commitments gave it “visibility to a half a trillion dollars in Blackwell and Rubin revenue” from early 2025 through the end of 2026.
Sovereign and industry buildouts expand demand
Colette Kress, Nvidia’s CFO, said the Saudi agreement is incremental to previously disclosed orders.
Jensen Huang detailed additional large projects. He said Anthropic is “adopting Nvidia” and committed “up to one gigawatt” of compute built with Grace Blackwell and Vera Rubin systems.
He also said Nvidia is working with OpenAI on a plan to build and deploy “at least 10 gigawatts of AI data centers.”
Nvidia listed other deployments in pharmaceuticals, manufacturing and robotics. Huang cited Eli Lilly’s drug-discovery data center, xAI’s Colossus 2 site and a new AWS-Humain partnership involving up to 150,000 AI accelerators.
He said most industries “haven’t really engaged agentic AI yet, and they’re about to,” and added that “each country will find their own infrastructure.”
The company described these projects alongside demand from hyperscalers. Nvidia said its GPU installed base, including Blackwell, Hopper and Ampere, is “fully utilized,” and that “the clouds are sold out.”
Data-center revenue reached $51.2 billion in the quarter. Total revenue was $57 billion, up 62% year over year. Nvidia guided to about $65 billion for the next quarter.
Revenue visibility tied to multi-gigawatt deployments
Nvidia provided internal per-gigawatt revenue assumptions for the first time.
Huang said earlier-generation transitions ran roughly $20 billion to $25 billion per gigawatt, that Grace Blackwell is about $30 billion (“30,” plus or minus), and that Rubin will be higher.
These figures frame how hyperscaler, sovereign and enterprise infrastructure plans connect to the company’s $500 billion outlook.
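Those per-gigawatt assumptions allow a rough sanity check on the $500 billion figure. The sketch below is back-of-the-envelope arithmetic using the numbers stated on the call; it is illustrative, not company guidance.

```python
# Back-of-the-envelope check on the scale implied by Nvidia's outlook.
# Dollar figures are from the earnings call; the calculation is illustrative.

revenue_visibility_bn = 500        # "half a trillion" in Blackwell and Rubin revenue
grace_blackwell_per_gw_bn = 30     # Huang: "30," plus or minus, per gigawatt
earlier_gen_per_gw_bn = (20 + 25) / 2  # earlier transitions: $20-25B per gigawatt

# If the entire outlook were priced at Grace Blackwell rates:
implied_gw = revenue_visibility_bn / grace_blackwell_per_gw_bn
print(f"Implied buildout at ~$30B/GW: about {implied_gw:.0f} GW")  # about 17 GW
```

At roughly $30 billion per gigawatt, the $500 billion outlook implies on the order of 17 gigawatts of deployed capacity, which is why individual commitments measured in single gigawatts (Anthropic) or tens of gigawatts (OpenAI) loom so large in the company's framing.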
Blackwell contributed heavily to results. Huang said GB300 “crossed over GB200” and represented “roughly two thirds of the total Blackwell revenue” in the quarter.
Nvidia said generational improvements in performance per watt and its software stack extend the useful life of older GPUs and increase customer throughput.
The company emphasized that one architecture serves pre-training, post-training and inference and said inference compute has grown as large-context models expand.
Nvidia reported growth in networking. The company cited $8.2 billion in networking revenue driven by NVLink, InfiniBand and Spectrum-X Ethernet systems.
Huang said large AI factories at Meta, Microsoft, Oracle and xAI use Spectrum-X and that NVLink Fusion is planned for future systems with partners including Intel and Ampere Computing.
Constraints and risks remain in the buildout
Nvidia said it is not assuming any data-center compute revenue from China in its outlook.
Huang said H200 sales in China were “approximately $50 million” and that “sizable purchase orders never materialized” because of geopolitical limits and local competition.
Kress said the company is “committed to continued engagement” with U.S. and Chinese authorities but is forecasting without China shipments for now.
The company also highlighted near-term operational requirements. Inventory increased 32% quarter over quarter, and supply commitments rose 63%.
Nvidia said it is preparing for significant growth and that its supply-chain partners have long-range visibility into demand.
The company maintained gross-margin guidance in the mid-70s despite higher input costs.
Huang described power as a constraint for every large installation. He said any site “still only has one gigawatt of power” and tied Nvidia’s competitiveness to performance per watt and full-stack optimization.
He said Nvidia’s architecture is designed to extract the most output from a fixed power envelope and argued that this drives revenue per gigawatt across generations.
Nvidia said it will continue stock buybacks and selective investments tied to its software ecosystem and model-builder partnerships.
The company reiterated its plan to ramp the Rubin platform in the second half of 2026.
“We’re in every cloud…we’re in every computer. One architecture. Things just work,” Jensen Huang said.