AI & Financial Markets

Nvidia H200 GPU Sells Out Within Hours as AI Demand Hits New Record


Nvidia’s newly launched H200 GPU — its most powerful AI training chip to date — sold out within hours of becoming available to cloud providers and enterprise customers, underscoring the insatiable demand for AI computing infrastructure that has driven Nvidia’s stock price up over 600% in the past two years.

The H200’s Capabilities

The H200 delivers approximately 60% more memory bandwidth than its predecessor, the H100, and features 141 gigabytes of HBM3e memory — critical for running the massive parameter counts of next-generation AI models. For large language model inference, the performance improvement translates to roughly 2x faster token generation, which directly reduces the cost of running AI services at scale.
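For context, a common back-of-envelope model treats LLM token generation as memory-bandwidth-bound: producing each token requires streaming the model weights from HBM, so the upper limit on tokens per second is roughly memory bandwidth divided by model size in bytes. The Python sketch below illustrates that relationship; the absolute bandwidth and model-size figures are illustrative assumptions, with only the roughly 60% bandwidth uplift taken from the article.

# Rough roofline estimate for memory-bound LLM token generation.
# Only the ~60% bandwidth uplift comes from the article; the absolute
# bandwidth and model size below are illustrative assumptions.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode rate when generating one token requires streaming
    all model weights from HBM once (the memory-bandwidth-bound regime)."""
    return bandwidth_gb_s / model_size_gb

MODEL_SIZE_GB = 140.0               # assumption: 70B parameters at 2 bytes each (FP16)
H100_BW_GB_S = 3350.0               # assumption: ballpark H100 HBM bandwidth
H200_BW_GB_S = H100_BW_GB_S * 1.6   # article's ~60% bandwidth uplift

for name, bw in [("H100", H100_BW_GB_S), ("H200", H200_BW_GB_S)]:
    rate = tokens_per_second(bw, MODEL_SIZE_GB)
    print(f"{name}: ~{rate:.0f} tokens/s per GPU (bandwidth-bound upper limit)")

In this simple model, bandwidth alone accounts for about a 1.6x gain; the remaining gap to the quoted 2x presumably comes from the larger 141 GB capacity, which allows bigger batches and keeps more of the KV cache resident on the GPU.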

Nvidia claims the H200 can train a GPT-4 scale model 30% faster than the H100 at the same power budget, an efficiency gain that translates into significant time and energy savings for hyperscalers training the next generation of models.
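To make the claimed 30% speedup concrete, the short sketch below works through the GPU-hour and rental-cost arithmetic for a hypothetical training run; the run size and hourly price are assumptions chosen for illustration, not figures from Nvidia or any cloud provider.

# Back-of-envelope savings from a claimed 30% training speedup at equal power.
# The run size and hourly rental price are hypothetical, for illustration only.

SPEEDUP = 1.30                # article's claim: 30% faster than the H100
H100_GPU_HOURS = 10_000_000   # assumption: a large frontier-scale training run
PRICE_PER_GPU_HOUR = 4.00     # assumption: USD per GPU-hour of cloud rental

h200_gpu_hours = H100_GPU_HOURS / SPEEDUP
saved_hours = H100_GPU_HOURS - h200_gpu_hours
saved_dollars = saved_hours * PRICE_PER_GPU_HOUR

print(f"GPU-hours on H200: {h200_gpu_hours:,.0f}")
print(f"GPU-hours saved:   {saved_hours:,.0f}")
print(f"Rental savings:    ${saved_dollars:,.0f}")

Because the comparison is at the same power budget per GPU, the energy bill would fall by the same roughly 23% as the wall-clock GPU-hours.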

Who Got Allocation

Microsoft Azure, Google Cloud, and Amazon Web Services received the largest initial allocations, consistent with their status as Nvidia’s top customers. CoreWeave and Lambda Labs, GPU cloud providers focused specifically on AI workloads, also secured significant quantities. Several AI startups reportedly placed orders months ago and are still waiting for their allocations.

Market Impact

Nvidia’s stock climbed 4.2% on the day of the H200 announcement, adding approximately $90 billion in market capitalization in a single session. The company’s valuation has now surpassed $2 trillion, placing it alongside Apple and Microsoft in the exclusive club of multi-trillion-dollar companies. Analysts at Goldman Sachs raised their price target to $1,400, citing a supply-demand imbalance that shows no sign of easing through 2026.

Supply Constraints Continue

Despite Nvidia’s efforts to ramp production with manufacturing partner TSMC, supply remains the binding constraint on AI infrastructure build-out globally. Jensen Huang, Nvidia’s CEO, acknowledged the shortage at a recent investor conference, saying the company is “working as fast as we can” with TSMC to increase capacity. TSMC has announced plans to add dedicated CoWoS advanced packaging capacity, the step that joins HBM memory stacks to the GPU and currently the main production bottleneck, but significant new supply won’t come online until late 2025.

AI Ground News Editorial Team

