Nvidia CEO talks up AI post-training, test-time scaling and gigawatts
Nvidia has continued to see record growth in its datacentre business, driven by the acceleration of artificial intelligence (AI) workloads.
The company reported quarterly revenue of $35.1bn, up 17% from Q2 and up 94% from a year ago. Its datacentre business, which provides graphics processing units (GPUs) for AI servers, contributed $30.8bn during the quarter, representing the bulk of the company’s revenue.
“The age of AI is in full steam, propelling a global shift to Nvidia computing,” said Jensen Huang, founder and CEO of Nvidia.
Discussing the company and next-generation chips, he added: “Demand for Hopper and anticipation for Blackwell – in full production – are incredible as foundation model makers scale pre-training, post-training and inference.
“AI is transforming every industry, company and country,” said Huang. “Enterprises are adopting agentic AI to revolutionise workflows. Industrial robotics investments are surging with breakthroughs in physical AI, and countries have awakened to the importance of developing their national AI and infrastructure.”
According to a transcript of the earnings call posted on Seeking Alpha, chief financial officer Colette Kress said sales of Nvidia H200 GPUs increased significantly to “double-digit billions”. She described this as “the fastest product ramp in our company’s history”, adding that cloud service providers accounted for approximately half of Nvidia’s datacentre sales.
During the earnings call, Huang discussed methods to improve the accuracy and scaling of large language models. “AI foundation model pre-training scaling is intact and it’s continuing,” he said. “As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale.”
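The "empirical law" Huang refers to is typically expressed as a power law: loss falls off as model or data scale grows, with an exponent estimated by fitting observed runs rather than derived from first principles. A minimal sketch of such a fit, using purely synthetic numbers (the function name and the constants are illustrative, not Nvidia data):

```python
import math

def fit_power_law(points):
    """Least-squares fit of loss = a * n**(-b), done in log-log space
    where the power law becomes a straight line."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(loss) for _, loss in points]
    k = len(points)
    mx = sum(xs) / k
    my = sum(ys) / k
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # a, b

# Synthetic points generated from loss = 10 * n**-0.08 (illustrative only)
pts = [(n, 10.0 * n ** -0.08) for n in (1e6, 1e7, 1e8, 1e9)]
a, b = fit_power_law(pts)
```

Because the relationship is empirical, practitioners refit the exponent as new training runs come in; the "evidence that it continues to scale" is the fit holding at larger n.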
Huang said that post-training scaling, which began with reinforcement learning from human feedback, now also uses AI feedback and synthetically generated data.
Another approach is test-time scaling. “The longer it thinks, the better and higher-quality answer it produces, and it considers approaches like chain of thought and multi-path planning and all kinds of techniques necessary to reflect, and so on and so forth,” he said. “It’s a little bit like us doing the thinking in our head before we answer a question.”
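One common form of the multi-path idea Huang describes is self-consistency: sample several independent reasoning chains and take a majority vote over their final answers. A minimal sketch, with a stand-in function in place of a real model (the noisy oracle and its 70% accuracy are assumptions for illustration):

```python
import random
from collections import Counter

def sample_answer(question, rng):
    # Stand-in for one reasoning chain from a model; here a noisy
    # oracle that returns the right answer ("42") 70% of the time
    # and a random wrong digit otherwise.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 9))

def self_consistency(question, n_samples=25, seed=0):
    """Spend more compute at test time: sample many chains and
    majority-vote over their final answers."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```

Raising `n_samples` is the "longer it thinks" lever: more sampled paths cost more inference compute but make the majority answer more reliable.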
Huang said increasing the performance of GPUs reduces the cost of training and AI inferencing. “We’re reducing the cost of AI so that it can be much more accessible,” he added.
Looking at factors that could curb the company’s phenomenal growth trajectory, Huang said: “Most datacentres are now a hundred megawatts to several hundred megawatts, and we’re planning on gigawatt datacentres.
“It doesn’t really matter how large the datacentres are, the power is limited,” he said, adding that what matters to Nvidia datacentre customers is how they can deliver the highest performance per watt, which translates directly into the highest revenues.