SC22 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Birds of a Feather Archive

Benchmarking across HPC Architectures


Authors: Olga Pearce (Lawrence Livermore National Laboratory, Texas A&M University), Brian Austin (Lawrence Berkeley National Laboratory (LBNL)), Jens Domke (RIKEN Center for Computational Science (R-CCS), RIKEN), Todd Gamblin (Lawrence Livermore National Laboratory), Josef Weidendorfer (Leibniz Supercomputing Centre, Technical University Munich), Christopher Zimmer (Oak Ridge National Laboratory (ORNL))

Abstract: HPC centers around the world use benchmarks to evaluate their machines and to engage with vendors during procurement. The goal of this BoF is twofold. First, a series of short presentations will gather information on the state-of-the-art methodologies for creating and validating benchmark sets. Second, an open discussion will gather community feedback on pitfalls of the current methodologies and how these methodologies should evolve to accommodate the growing diversity of computational workloads and HPC architectures. The intended audience is HPC application developers and users, teams benchmarking HPC data centers, HPC vendors, and performance researchers.



Long Description: HPC centers around the world use benchmarks to evaluate their machines and to engage with vendors during procurement and throughout the lifetime of the HPC system. The goal of this BoF is twofold. First, a series of short presentations will gather information on the state-of-the-art methodologies for creating and validating benchmark sets. Second, an open discussion will gather community feedback on pitfalls of the current methodologies and how these methodologies should evolve to accommodate the growing diversity of computational workloads and HPC architectures. The intended audience is HPC application developers and users, teams benchmarking HPC data centers, HPC vendors, and performance researchers.

Discussion points with the audience:
- How to design benchmarks that are comparable across architectures (e.g., with/without accelerators)
- How to down-select an existing application mix to representative proxies
- How to use data center monitoring data to guide decisions
- How to capture the behavior of energy-saving tools such as EAR/GeoPM in the benchmarks (which are expected to be used later in production)
- How to design adequate performance models for extrapolating to larger systems (see the sketch after this list)
- Reproducibility (ensuring the benchmark can be rebuilt and rerun a year later)
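
To make the performance-model discussion point concrete, the following is a minimal sketch of the kind of extrapolation model one might fit to benchmark measurements. It is not part of the BoF materials; the node counts, timings, and Amdahl-style model form are illustrative assumptions only.

```python
# Hypothetical sketch: fit a simple Amdahl-style strong-scaling model to
# measured benchmark times and extrapolate to a larger node count.
# The node counts and timings below are placeholder values, not measurements
# from any real system or benchmark.
import numpy as np

nodes = np.array([64, 128, 256, 512], dtype=float)  # measured scales
times = np.array([410.0, 221.0, 128.0, 83.0])       # wall-clock seconds

# Model: t(n) = serial + parallel / n
# Solve for (serial, parallel) with linear least squares.
A = np.column_stack([np.ones_like(nodes), 1.0 / nodes])
(serial, parallel), *_ = np.linalg.lstsq(A, times, rcond=None)

target = 4096.0
predicted = serial + parallel / target
print(f"serial time ~ {serial:.1f} s, "
      f"predicted time at {int(target)} nodes ~ {predicted:.1f} s")
```

A model this simple mainly illustrates the extrapolation question raised above; real procurement models would also need to account for network, memory, and I/O effects that do not follow a single scaling curve.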


URL: https://github.com/LLNL/benchmarking-BoF-SC22

