Nvidia’s H100 AI Processor Promises Next-Gen Performance
- By DSAITrends editors
- May 10, 2022
Slated to go on sale later this year, Nvidia’s upcoming H100 AI accelerator recently found its way into the news again as fresh reports revealed further details about its capabilities.
Next-gen AI performance
Touted as the next-generation flagship of Nvidia’s data-center AI processor lineup, the H100 competes directly with the likes of AMD’s Instinct MI250X and Google’s TPU, giving AI developers the performance to speed up their research and build more advanced AI models.
Under the hood, the H100 GPU packs some 80 billion transistors into an 814 mm² die, putting it at the edge of what today’s chipmaking equipment can produce.
It supports the new HBM3 standard, which allows for up to 80 GB of onboard memory at 3 TB/s of bandwidth; by comparison, the RTX 3090 Ti offers just 1 TB/s.
Each H100 GPU is made up of 144 Streaming Multiprocessors (SMs) organized into a total of 8 Graphics Processing Clusters (GPCs). In terms of performance, CNET reports that the H100 offers 4,000 TFLOPs of FP8, 2,000 TFLOPs of FP16, 1,000 TFLOPs of TF32, and 60 TFLOPs of FP64 compute performance.
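As a quick sanity check, a short Python sketch (using only the figures quoted in this article, not independently verified) shows how the precision tiers relate to one another:

```python
# Throughput figures quoted above, in TFLOPs (as reported by CNET).
h100_tflops = {"FP8": 4000, "FP16": 2000, "TF32": 1000, "FP64": 60}

# Each step down in precision from TF32 to FP8 doubles throughput.
assert h100_tflops["FP8"] == 2 * h100_tflops["FP16"]
assert h100_tflops["FP16"] == 2 * h100_tflops["TF32"]

# Memory bandwidth comparison from the article:
# 3 TB/s for the H100 vs. roughly 1 TB/s for the RTX 3090 Ti.
bandwidth_ratio = 3.0 / 1.0
print(f"H100 vs RTX 3090 Ti bandwidth: {bandwidth_ratio:.0f}x")  # 3x
```

The pattern of throughput doubling as precision halves is typical of tensor-core designs, which is why the low-precision FP8 figure dominates the marketing numbers.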
Nvidia says the H100 is tuned for transformer models, a neural-network architecture that is particularly effective for natural language processing (NLP) tasks.
The chip giant also estimates that, in certain scenarios, the H100 is six times faster than its predecessor, the A100, which the company launched two years ago. That figure assumes supporting infrastructure, including the new SXM connector to accommodate the chip’s higher power consumption, as well as use of the new FP8 data type.
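For readers curious what an 8-bit float actually looks like, here is a minimal Python sketch that decodes the E4M3 variant of FP8 (one sign bit, four exponent bits, three mantissa bits, exponent bias 7). The encoding details follow the published FP8 format rather than anything stated in this article, and the H100 also supports a second variant, E5M2, with a wider exponent range:

```python
def decode_fp8_e4m3(byte: int) -> float:
    """Decode an FP8 E4M3 value: 1 sign bit, 4 exponent bits,
    3 mantissa bits, exponent bias 7 (sketch; ignores NaN encodings)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF   # 4 exponent bits
    mant = byte & 0x7         # 3 mantissa bits
    if exp == 0:              # subnormal: no implicit leading 1
        return sign * (mant / 8.0) * 2.0 ** -6
    return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - 7)

print(decode_fp8_e4m3(0b00111000))  # exp=7, mant=0 -> 1.0
print(decode_fp8_e4m3(0b01000100))  # exp=8, mant=4 -> 1.5 * 2 = 3.0
```

With only three mantissa bits, FP8 trades precision for throughput and memory savings, which is the bargain that makes the headline TFLOPs figure possible.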
Organizations that need more processing prowess can turn to Nvidia's DGX H100 System which houses eight Hopper GPUs in a rack-mountable chassis.
NLP researchers and product developers can work faster with the H100, according to Ian Buck, vice president of Nvidia’s hyperscale and high-performance computing group. “What took months should take less than a week,” Buck said.
According to a report on Tom’s Hardware, the PCIe variant of the H100 was recently spotted for sale on the website of a Japanese retailer for around USD 36,400, inclusive of a tax of roughly 10%. That is slightly higher than the A100’s price, though availability of supplies remains to be seen.
Image credit: iStockphoto/Marko Rupena