Industry's First-to-Market Supermicro NVIDIA HGX™ B200 Systems Demonstrate AI Performance Leadership on MLPerf® Inference v5.0 Results
Latest Benchmarks Show Supermicro Systems with the NVIDIA B200 Delivering 3X the Token Generation per Second of the Previous Generation of Systems
"Supermicro remains a leader in the AI industry, as evidenced by the first new benchmarks released by MLCommons in 2025," said
Learn more about the new MLPerf v5.0 Inference benchmarks at: https://mlcommons.org/benchmarks/inference-datacenter/
Supermicro is the only system vendor publishing record MLPerf inference performance (on select benchmarks) for both the air-cooled and liquid-cooled NVIDIA HGX™ B200 8-GPU systems. Both air-cooled and liquid-cooled systems were operational before the MLCommons benchmark start date, and Supermicro engineers optimized the systems and software to achieve these results. Within the operating margin, the Supermicro air-cooled B200 system performed at the same level as the liquid-cooled B200 system. Supermicro was delivering these systems to customers while the benchmarks were being conducted.
MLCommons requires that all results be reproducible, that the products be available, and that the results be auditable by other MLCommons members. Supermicro engineers optimized the systems and software as allowed by the MLCommons rules.
The SYS-421GE-NBRT-LCC (8x NVIDIA B200-SXM-180GB) and SYS-A21GE-NBRT (8x NVIDIA B200-SXM-180GB) showed performance leadership running the Mixtral 8x7B mixture-of-experts inference benchmark at 129,000 tokens/second. The Supermicro air-cooled and liquid-cooled NVIDIA B200 based systems delivered over 1,000 tokens/second of inference on the large Llama3.1-405b model, whereas previous generations of GPU systems delivered far lower throughput. For smaller inferencing tasks, on the Llama2-70b benchmark, a Supermicro system with the NVIDIA B200-SXM-180GB installed showed the highest performance from a Tier 1 system supplier.
Specifically:
- Stable Diffusion XL (Server): SYS-A21GE-NBRT (8x B200-SXM-180GB), #1 at 28.92 queries/s
- Llama2-70b-interactive-99 (Server): SYS-A21GE-NBRT (8x B200-SXM-180GB), #1 at 62,265.70 tokens/s
- Llama3.1-405b (Offline): SYS-421GE-NBRT-LCC (8x B200-SXM-180GB), #1 at 1,521.74 tokens/s
- Llama3.1-405b (Server): SYS-A21GE-NBRT (8x B200-SXM-180GB), #1 at 1,080.31 tokens/s (for an 8-GPU node)
- Mixtral-8x7b (Server): SYS-421GE-NBRT-LCC (8x B200-SXM-180GB), #1 at 129,047.00 tokens/s
- Mixtral-8x7b (Offline): SYS-421GE-NBRT-LCC (8x B200-SXM-180GB), #1 at 128,795.00 tokens/s
"MLCommons congratulates Supermicro on their submission to the MLPerf Inference v5.0 benchmark. We are pleased to see their results showcasing significant performance gains compared to earlier generations of systems," said
Supermicro offers a comprehensive AI portfolio with over 100 GPU-optimized systems, including both air-cooled and liquid-cooled options, and a choice of CPUs, ranging from single-socket optimized systems to 8-way multiprocessor systems. Supermicro rack-scale solutions include computing, storage, and networking components, which reduce the time required to install them once delivered to a customer site.
Supermicro's NVIDIA HGX B200 8-GPU systems utilize next-generation liquid-cooling and air-cooling technology. The newly developed cold plates and the new 250kW coolant distribution unit (CDU) more than double the cooling capacity of the previous generation in the same 4U form factor. Available in 42U, 48U, or 52U configurations, the rack-scale design with the new vertical coolant distribution manifolds (CDMs) no longer gives up valuable rack units to cooling. This enables eight systems, comprising 64 NVIDIA Blackwell GPUs, in a 42U rack, and up to 12 systems with 96 NVIDIA Blackwell GPUs in a 52U rack.
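The rack-density figures above follow from simple arithmetic: each HGX B200 system carries 8 GPUs. A minimal sketch (the function name is illustrative, not a Supermicro tool) that checks those counts:

```python
# Each NVIDIA HGX B200 system in this rack design carries 8 GPUs.
GPUS_PER_SYSTEM = 8

def rack_gpu_count(systems_per_rack: int) -> int:
    """Total Blackwell GPUs in a rack holding the given number of systems."""
    return systems_per_rack * GPUS_PER_SYSTEM

print(rack_gpu_count(8))   # 42U rack: 8 systems -> 64 GPUs
print(rack_gpu_count(12))  # 52U rack: 12 systems -> 96 GPUs
```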
The new air-cooled 10U NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom to accommodate eight 1000W TDP Blackwell GPUs. Up to four of the new 10U air-cooled systems can be installed and fully integrated in a rack, the same density as the previous generation, while delivering up to 15x the inference and 3x the training performance.
About
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in
Supermicro, Server
All other brands, names, and trademarks are the property of their respective owners.
View original content to download multimedia: https://www.prnewswire.com/news-releases/industrys-first-to-market-supermicro-nvidia-hgx-b200-systems-demonstrate-ai-performance-leadership-on-mlperf-inference-v5-0-results-302419115.html
SOURCE