Workshop: IA^3 2022 - 12th Workshop on Irregular Applications: Architectures and Algorithms
Authors: Paolo Sylos Labini (Free University of Bozen-Bolzano, Italy); Flavio Vella (University of Trento, Italy); Massimo Bernaschi (Institute for Applied Computing, National Research Council (IAC-CNR), Italy); Francesco Silvestri (University of Padova, Italy); and Werner Nutt (Free University of Bozen-Bolzano, Italy)
Abstract: Research to accelerate matrix multiplication, driven by the growing computational demands of deep learning, has produced many efficient architectural solutions, such as NVIDIA’s Tensor Cores. These accelerators are designed to efficiently process large volumes of small dense matrix products in parallel. However, it is not obvious how to leverage them for sparse matrix multiplication. A natural way to adapt the accelerators to this problem is to divide the matrix into small blocks and multiply only the nonzero blocks. In this paper, we investigate ways to reorder the rows of a sparse matrix so as to cluster its nonzero elements into a few dense blocks, thereby reducing the number of nonzero blocks. While this pre-processing can be computationally expensive, we show that the high speed-up provided by the accelerators easily repays the cost, especially when several multiplications follow a single reordering.
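The blocking idea in the abstract can be illustrated with a small sketch (not the paper's actual algorithm; the function name and the toy matrix are invented for illustration). It counts how many b-by-b blocks of a sparse matrix contain at least one nonzero, and shows how a row permutation that groups rows with similar sparsity patterns can reduce that count:

```python
import numpy as np

def count_nonzero_blocks(A, b):
    """Count the b-by-b blocks of A that contain at least one nonzero.

    Only these blocks would need to be multiplied on a dense-block
    accelerator, so fewer nonzero blocks means less work.
    """
    n, m = A.shape
    count = 0
    for i in range(0, n, b):
        for j in range(0, m, b):
            if np.any(A[i:i + b, j:j + b]):
                count += 1
    return count

# Toy 4x4 sparse matrix: rows 0 and 2 have their nonzeros on the left,
# rows 1 and 3 on the right, but similar rows are not adjacent.
A = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
])

print(count_nonzero_blocks(A, 2))        # 4: every 2x2 block is nonzero

# Reordering rows so that rows with similar patterns are adjacent
# concentrates the nonzeros into fewer, denser blocks.
perm = [0, 2, 1, 3]
print(count_nonzero_blocks(A[perm], 2))  # 2: two blocks are now all-zero
```

The example also hints at the trade-off the abstract mentions: finding a good permutation costs time up front, but the permuted matrix can then be reused across many subsequent multiplications.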