Performance Portability of Sparse Block Diagonal Matrix Multiple Vector Multiplications on GPUs
Description
The emergence of computer architectures built around multiple kinds of accelerators, each with its own programming models, makes it challenging to achieve performance portability for large-scale scientific simulation software. In this paper, we focus on a sparse block diagonal matrix multiple vector (SpMM) computational kernel and discuss techniques that can be used to achieve performance portability on NVIDIA- and AMD-based accelerators using CUDA, HIP, OpenACC, and Kokkos. We show that performance can vary significantly across programming models, GPU architectures, and problem settings, by up to 52x in the explored problems. Our study revisits the performance portability aggregation metric to guide the development and selection of performance-portable variants.
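To make the kernel concrete, here is a minimal serial C++ reference for the block diagonal SpMM operation Y = A * X described above. This is an illustrative sketch only, not the paper's implementation: it assumes the matrix consists of `nblocks` dense `bs`-by-`bs` blocks on the diagonal, stored contiguously block by block in row-major order, and that `X` and `Y` hold `nvec` vectors as row-major columns. A GPU variant (CUDA, HIP, or Kokkos) would typically map the block, row, and vector loops onto the parallel hierarchy.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Serial reference for block diagonal SpMM: Y = A * X.
// blocks : nblocks dense bs x bs diagonal blocks, row-major, back to back
// X, Y   : (nblocks * bs) x nvec matrices (multiple vectors), row-major
void block_diag_spmm(const std::vector<double>& blocks,
                     const std::vector<double>& X,
                     std::vector<double>& Y,
                     std::size_t nblocks, std::size_t bs, std::size_t nvec) {
    for (std::size_t b = 0; b < nblocks; ++b)       // each diagonal block
        for (std::size_t i = 0; i < bs; ++i)        // row within the block
            for (std::size_t v = 0; v < nvec; ++v) { // each vector
                double acc = 0.0;
                for (std::size_t j = 0; j < bs; ++j) // columns of the block
                    acc += blocks[(b * bs + i) * bs + j]
                         * X[(b * bs + j) * nvec + v];
                Y[(b * bs + i) * nvec + v] = acc;
            }
}
```

Because each block touches only its own slice of X and Y, the blocks are fully independent, which is what makes this kernel amenable to the per-block parallelization strategies compared across the four programming models.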
Time
Sunday, 13 November 2022, 11:37am - 12pm CST
Registration Categories
Performance Portability