
What is the H100, the chip driving generative AI?

It’s rare that a computer component sets pulses racing beyond the tech industry. But when Nvidia Corp issued a blowout sales forecast in May, sending its market value above US$1 trillion, the star of the show was its latest graphics processing unit, the H100. The new data center chip is showing investors that the buzz around generative artificial intelligence (AI) – systems that can perform a wide range of tasks at superpowered speed – is translating into real revenue, at least for Nvidia. Demand for the H100 is so great that some customers are having to wait as long as six months to receive it.


What is the H100?

The H100, whose name is a nod to computer science pioneer Grace Hopper, is a graphics processor. It’s a type of chip that normally lives in PCs and helps gamers get the most realistic visual experience. Unlike its regular counterparts, though, the chip’s 80 billion transistors are arranged in cores that are tuned to process data at high speed, not generate images. Nvidia, founded in 1993, pioneered this market with investments going back almost two decades, when it bet that the ability to do work in parallel would one day make its chips valuable in applications outside of gaming.


Why is the H100 so special?

Generative AI platforms learn to complete tasks such as translating text, summarising reports and writing computer code after being trained on vast quantities of pre-existing material. The more they see, the better they become at things like recognising human speech or writing job cover letters. They develop through trial and error, making billions of attempts to achieve proficiency and sucking up huge amounts of computing power in the process. Nvidia says the H100 is four times faster than its predecessor, the A100, at training these so-called large language models, or LLMs, and 30 times faster at replying to user prompts. For companies racing to train their LLMs to perform new tasks, that performance edge can be critical.
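To make that trial-and-error loop concrete, here is a minimal sketch in plain Python: a toy model makes thousands of attempts at a simple task and shrinks its error a little after each one. The task, numbers and variable names are illustrative only; real LLM training follows the same pattern, but across billions of parameters on racks of GPUs.

```python
# Minimal sketch of "trial and error" learning: a toy model is nudged
# toward lower error on every attempt. Illustrative only, not how any
# production LLM is actually trained.
import random

# Toy task: the model must discover that y = 3x + 2.
data = [(x, 3 * x + 2) for x in range(-10, 11)]

w, b = random.random(), random.random()  # random starting guesses
lr = 0.001                               # learning rate: size of each correction

for step in range(5000):                 # thousands of "attempts"
    x, y = random.choice(data)
    guess = w * x + b                    # the model's attempt
    error = guess - y                    # how wrong it was
    w -= lr * error * x                  # adjust parameters to shrink the error
    b -= lr * error

print(f"learned w={w:.2f}, b={b:.2f} (target: w=3, b=2)")
```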

How did Nvidia get pole position?

It’s the world leader in so-called graphics processing units (GPUs) – the bits of a computer that generate the images you see on the screen. The most powerful GPUs, which can produce realistic-looking scenery in fast-moving video games, have multiple processing cores that perform several simultaneous computations. Nvidia’s engineers realised in the early 2000s that GPUs could be retooled to become so-called accelerators for other applications, by dividing tasks up into smaller lumps and then working on them at the same time. Just over a decade ago, AI researchers discovered that their work could finally be made practical by using this type of chip.
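As a rough illustration of that accelerator idea, the sketch below splits one large job into chunks and processes them simultaneously using a Python process pool. The worker function, data and chunk count are made up for the example; a GPU performs the same divide-and-work-in-parallel trick across thousands of hardware cores rather than a handful of CPU processes.

```python
# Minimal sketch of the accelerator idea: divide one big job into chunks
# and work on them at the same time. A process pool stands in here for
# the thousands of cores a GPU provides in hardware.
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    """Stand-in for a compute-heavy kernel applied to one slice of data."""
    return sum(x * x for x in chunk)

def main():
    data = list(range(1_000_000))
    n = 8  # number of parallel workers; a GPU would use thousands of cores
    size = len(data) // n
    chunks = [data[i * size:(i + 1) * size] for i in range(n)]

    with ProcessPoolExecutor(max_workers=n) as pool:
        partials = pool.map(work, chunks)  # chunks are processed simultaneously
    print(sum(partials))

if __name__ == "__main__":  # required on platforms that spawn worker processes
    main()
```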

What’s the state of the competition?

Nvidia controls about 80 per cent of the market for the accelerators in the AI data centers operated by Amazon.com’s AWS, Alphabet’s Google Cloud and Microsoft’s Azure. Those companies’ in-house efforts to build these chips, and rival products from chipmakers such as Advanced Micro Devices (AMD) and Intel, haven’t made much of an impression on the accelerator market so far.

Why is that?

Nvidia has rapidly updated its offerings, including the software that supports the hardware, at a pace no competitor has yet been able to match. Chips such as Intel’s Xeon processors have fewer processing cores. While they’re capable of more complex data crunching, they’re much slower at working through the mountains of information typically used to train AI software. Nvidia’s data centre division posted a 41 per cent increase in revenue to US$15 billion in 2022.
