Workshop: PMBS22: The 13th International Workshop on Performance Modeling, Benchmarking, and Simulation of High-Performance Computer Systems
Authors: Adrian Perez Dieguez and Khaled Z. Ibrahim (Lawrence Berkeley National Laboratory (LBNL)); Min Choi and Bryan M. Wong (University of California, Riverside); and Xinran Zhu (Cornell University)
Abstract: Time-Dependent Density Functional Theory (TDDFT) workloads are an example of high-impact computational methods that must leverage the performance of HPC architectures. However, finding the optimal values of their performance-critical parameters poses performance-portability challenges that must be addressed. In this work, we propose an ML-based tuning methodology, built on Bayesian optimization and transfer learning, to address performance portability for TDDFT codes on HPC systems. Our results demonstrate the effectiveness of our transfer-learning proposal for TDDFT workloads: it reduced the number of executed evaluations by up to 86% compared to an exhaustive search for the globally optimal performance parameters on the Cori and Perlmutter supercomputers. Compared to a plain Bayesian-optimization search, our proposal reduces the evaluations required to find the same optimal runtime configuration by up to 46.7%. Overall, this methodology can be applied to other scientific workloads on current and emerging high-performance architectures.
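To make the kind of search loop the abstract describes concrete, below is a minimal, self-contained sketch of Bayesian optimization over a discrete tuning-parameter space (here a hypothetical power-of-two "block size" with a synthetic runtime model — illustrative assumptions, not the authors' actual TDDFT workload or tuner). It fits a small Gaussian-process surrogate to the runtimes observed so far and picks the next configuration by a lower-confidence-bound acquisition, which is how such tuners avoid exhaustively evaluating every configuration.

```python
import math
import random

def runtime(block):
    # Hypothetical synthetic runtime model (assumption, not the paper's
    # workload); minimized at block = 32.
    return (math.log2(block) - 5.0) ** 2 + 1.0

def rbf(a, b, ls=1.0):
    # Squared-exponential (RBF) covariance between two 1-D points.
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting; fine for tiny systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, ls=1.0, jitter=1e-6):
    # Exact GP posterior mean/variance at query points xq, given data (xs, ys).
    K = [[rbf(a, b, ls) + (jitter if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    out = []
    for q in xq:
        k = [rbf(a, q, ls) for a in xs]
        mean = sum(ki * ai for ki, ai in zip(k, alpha))
        v = solve(K, k)
        var = max(rbf(q, q, ls) - sum(ki * vi for ki, vi in zip(k, v)), 0.0)
        out.append((mean, var))
    return out

def bayes_opt(candidates, n_init=3, n_iter=8, seed=0):
    rng = random.Random(seed)
    xs = rng.sample(candidates, n_init)   # random initial evaluations
    ys = [runtime(x) for x in xs]
    feats = {c: math.log2(c) for c in candidates}  # log2 space is smoother
    for _ in range(n_iter):
        pending = [c for c in candidates if c not in xs]
        if not pending:
            break
        post = gp_posterior([feats[x] for x in xs], ys,
                            [feats[c] for c in pending])
        # Minimizing runtime: pick the lowest lower-confidence bound,
        # mean - 2 * std, trading off exploitation and exploration.
        nxt = min(zip(pending, post),
                  key=lambda t: t[1][0] - 2.0 * math.sqrt(t[1][1]))[0]
        xs.append(nxt)
        ys.append(runtime(nxt))
    return min(zip(xs, ys), key=lambda t: t[1])

best_x, best_y = bayes_opt([2 ** k for k in range(1, 10)])
print(best_x, best_y)  # best block size found and its modeled runtime
```

The paper's transfer-learning contribution would correspond to warm-starting such a loop with evaluations from a related system or problem size, rather than starting from random initial samples.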