First Nvidia Hopper Benchmark Tests

Newest MLPerf inferencing results include tests of new chips, slimmed-down neural nets

It’s time for the “Olympics of machine learning” again, and if you’re tired of seeing Nvidia at the top of the podium over and over, too bad. At least this time, the GPU powerhouse put a new contender into the mix: its Hopper GPU, which delivered as much as 4.5 times the performance of its predecessor, the A100, and is due out in a matter of months. But Hopper was not alone in making it to the podium at MLPerf Inference v2.1. Systems based on Qualcomm’s Cloud AI 100 also made a good showing, and there were other new chips, new types of neural networks, and even new, more realistic ways of testing them.

Before I go on, let me repeat the canned answer to “What the heck is MLPerf?”

MLPerf is a set of benchmarks agreed upon by members of the industry group MLCommons. It is the first attempt to provide apples-to-apples comparisons of how good computers are at training and executing (inferencing) neural networks. In MLPerf’s inferencing benchmarks, systems made up of combinations of CPUs and GPUs or other accelerator chips are tested on up to six neural networks that perform a variety of common functions: image classification, object detection, speech recognition, 3D medical imaging, natural-language processing, and recommendation. The networks have already been trained on a standard set of data and must make predictions about data they have not been exposed to before.
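To make the inferencing half of that concrete, here is a minimal sketch (my illustration, not MLPerf’s actual test harness) of what a single inference measurement looks like: a pretrained ResNet-50, the network behind MLPerf’s image-classification test, classifying inputs it has never seen while a timer runs. The synthetic batch and the simple throughput figure are stand-ins for the benchmark’s real datasets and query generator.

```python
# A minimal sketch of one MLPerf-style inference measurement: run an
# already-trained image classifier on inputs it has never seen and time
# the predictions. PyTorch and a synthetic batch are illustrative
# stand-ins; real submissions use vendor-optimized runtimes and the
# official benchmark datasets.
import time

import torch
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT)  # pretrained network
model.eval()                                        # inference mode

# A synthetic batch stands in for never-before-seen images
# (8 images, 3 channels, 224x224 -- the network's expected input shape).
batch = torch.randn(8, 3, 224, 224)

with torch.no_grad():               # no gradients needed for inference
    start = time.perf_counter()
    logits = model(batch)           # one forward pass = one "query"
    elapsed = time.perf_counter() - start

predictions = logits.argmax(dim=1)  # predicted class for each image
print(f"{batch.size(0) / elapsed:.1f} images/sec")
print(f"predicted classes: {predictions.tolist()}")
```

In actual submissions, queries arrive through MLPerf’s LoadGen tool under several scenarios, such as single-stream, offline, and server, so the reported results capture latency and throughput under realistic load rather than one hand-timed forward pass.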