Nvidia’s Jensen Huang predicts $1tn in AI chip revenue over 2 years
Nvidia chief executive Jensen Huang on Monday said he expects at least $1tn in AI chip revenue over roughly the next two years, driven by adoption of tools such as Anthropic’s Claude Code and OpenAI’s OpenClaw.
“Right now where I stand . . . I see through 2027 at least $1tn” in revenue, Huang said, adding he was “certain” demand would prove to be higher.
Nvidia shares briefly gained more than 2 per cent after Huang’s comments at the $4.5tn semiconductor giant’s flagship GTC event in Silicon Valley.
But the enthusiasm was short-lived. The stock gave up some ground to end the day lower than before his presentation.
Huang’s bullish forecasts have struggled to impress Wall Street in recent months amid concerns about the returns from the huge investments being poured into AI infrastructure, threats to the semiconductor supply chain from conflict in the Middle East and shortages of the memory chips Nvidia’s products need.
The projection, which was well above Wall Street expectations, is the latest signal that Nvidia is banking on the AI boom continuing to accelerate.
Its role as the largest supplier of advanced AI chips has catapulted it to become the world’s most valuable company.
Huang had previously forecast $500bn in AI-driven revenue through to the end of 2026 based on “high confidence demand and purchase orders” for Nvidia’s latest Blackwell and Rubin hardware.
The prediction of $1tn in revenue from AI hardware is far higher than Wall Street consensus estimates for Nvidia’s total revenue.
Analysts’ forecasts for its 2027 and 2028 fiscal years — running through to the end of January 2028 — total about $835bn, according to S&P Capital IQ.
Huang said increasing demand for computing power was being driven by popular tools such as Anthropic’s Claude Code and the need for “inference” — the process of running AI models and applications.
During his two-hour keynote, Huang announced a range of initiatives, from robotaxi partnerships to a chip designed for orbital data centres, a concept Elon Musk has pitched as central to his vision for uniting SpaceX with xAI.
He also unveiled an addition to Nvidia’s family of chips, the Groq 3 “language processing unit”, which has been designed to speed up how AI systems respond to users’ queries.
The move shows Nvidia is looking to shore up its leading position in the AI chip market by exploring new chip architectures, departing from its history of offering a single GPU chip for AI workloads.
“We are in volume production now,” he said. “We’ll ship [Groq 3] in the second half [of 2026], probably in the Q3 timeframe.”
The chip will be manufactured by South Korea’s Samsung, Huang said, a departure for Nvidia as it has typically used Taiwan Semiconductor Manufacturing Company for building its AI processors.
It will be the first new product to come out of Nvidia’s licensing deal with chip start-up Groq late last year, under which Nvidia also hired Groq’s founder Jonathan Ross, who previously helped create Google’s AI chip.
Inference would only become more important as more companies adopt personal AI agent tools such as OpenClaw, Huang said.
OpenClaw, which allows users to create their own AI assistants that tap into their personal data and can perform tasks independently, has been a viral success this year, particularly in China.
In February, OpenAI hired Peter Steinberger, the founder of OpenClaw, which is freely available under an open-source licence.
Huang compared OpenClaw with other open-source tools that have underpinned the tech industry, from the Linux operating system which now dominates data centres to the Hypertext Transfer Protocol that allows browsers to load web pages. “This is the new computer,” he said.
Nvidia is offering a layer of software underneath OpenClaw’s agents, dubbed “NemoClaw”, which it says will create privacy and security guardrails that the standard product has so far lacked.
