NVIDIA unveils Blackwell superchip for AI

  • NVIDIA has revealed its latest chipset made specifically with AI computing in mind – the Blackwell B200.
  • It boasts 208 billion transistors and can handle up to 20 petaflops of FP4 operations.
  • Compared to NVIDIA’s previous AI-specific chip, the Hopper, Blackwell can deliver up to four times the AI training performance.

NVIDIA this week held its GTC conference in Silicon Valley, where more than 11,000 people attended the opening keynote of CEO Jensen Huang. The leather jacket-clad executive unveiled the company's latest superchip during the keynote – Blackwell.

Named after David Harold Blackwell, a University of California, Berkeley mathematician who specialised in game theory and statistics, the new superchip succeeds the Hopper generation that NVIDIA debuted only two years ago and serves as the company's latest offering for AI-related computing tasks.

The chipset, which comprises new components – namely two GPUs and one CPU connected within a node via a high-speed interconnect that operates at 10 terabytes per second – is said to be quite the step up from the Hopper series.

Judging by the numbers that NVIDIA is throwing around, that may well be the case.

Jensen Huang holding Blackwell (left) and Hopper (right).

The company explained that Blackwell will be available in three data centre-specific SKUs – the B100, B200, and GB200.

It is the latter that has grabbed everyone's attention: it is said to deliver seven times the inference performance of the Hopper GH200, up to four times the AI training performance, and a staggering 25 times better power efficiency.

“The amount of energy we save, the amount of networking bandwidth we save, the amount of wasted time we save, will be tremendous. The future is generative … which is why this is a brand new industry. The way we compute is fundamentally different. We created a processor for the generative AI era,” noted Huang.

Added to this is up to 20 petaflops of FP4 (4-bit floating point) compute on a single GPU. FP4 uses half as many bits per number as the FP8 format Hopper relied on; the reduced precision sacrifices some numerical accuracy but allows far more operations per second, which makes the new Blackwell chipset well suited to serving large models that demand enormous amounts of compute.
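To make the precision trade-off concrete, here is a minimal, illustrative Python sketch (not NVIDIA's implementation) of what 4-bit floating point quantisation does to model weights, assuming the commonly described E2M1 layout, whose positive representable magnitudes are just {0, 0.5, 1, 1.5, 2, 3, 4, 6}: every value gets snapped to the nearest of those few levels.

```python
# Illustrative FP4 (E2M1) quantisation: a 4-bit float can encode only a
# handful of magnitudes, so each weight is snapped to the nearest one.
# Assumption: E2M1 layout with max magnitude 6.0; real kernels also apply
# per-block scaling, which is omitted here for clarity.

FP4_E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # positive E2M1 values

def quantize_fp4(x: float) -> float:
    """Snap x to the nearest representable FP4 (E2M1) value."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 6.0)  # magnitudes beyond the maximum saturate
    nearest = min(FP4_E2M1, key=lambda v: abs(v - mag))
    return sign * nearest

weights = [0.07, -1.2, 2.6, 5.9, -8.4]
print([quantize_fp4(w) for w in weights])  # → [0.0, -1.0, 3.0, 6.0, -6.0]
```

Because each weight now fits in 4 bits instead of 8 or 16, the hardware can move and multiply twice or four times as many of them per cycle, which is where headline figures like 20 petaflops come from.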

“In the future, data centers are going to be thought of … as AI factories. Their goal in life is to generate revenues, in this case, intelligence,” he added, likely hoping that NVIDIA’s silicon will be the hardware of choice used in most datacentre environments that prioritise AI services to customers.

It remains to be seen when Blackwell's power will reach our part of the world, but NVIDIA already has buy-in from several of the industry's biggest AI players. To that end, Google, AWS, Dell Technologies, Meta, Microsoft, OpenAI, Oracle, and xAI have signalled their interest in the new superchip.

“Blackwell is being adopted by every major global cloud services provider, pioneering AI companies, system and server vendors, and regional cloud service providers and telcos all around the world,” highlighted NVIDIA in a blog post.

“The whole industry is gearing up for Blackwell,” enthused Huang.

If you’re inclined to watch the two-hour-long keynote, hit play on the video below.
