Blocking Sparse Matrices to Leverage Dense-Specific Multiplication
Description
Research to accelerate matrix multiplication, driven by the growing computational demands of deep learning, has produced many efficient architectural solutions, such as NVIDIA’s Tensor Cores. These accelerators are designed to process a high volume of small dense matrix products efficiently in parallel. However, it is not obvious how to leverage them for sparse matrix multiplication. A natural way to adapt the accelerators to this problem is to divide the matrix into small blocks and multiply only the nonzero blocks. In this paper, we investigate ways to reorder the rows of a sparse matrix so that its nonzero elements cluster into a few dense blocks, reducing the total number of nonzero blocks. While this pre-processing can be computationally expensive, we show that the high speed-up provided by the accelerators easily repays the cost, especially when several multiplications follow one reordering.
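To make the blocking idea concrete, here is a minimal sketch, not the paper's implementation: it partitions a matrix into b x b tiles, counts the tiles containing at least one nonzero, and applies a simple greedy row reordering that places rows with overlapping column patterns next to each other. The block size, the overlap heuristic, and all function names are illustrative assumptions.

# Sketch of blocking a sparse matrix and reordering rows to densify blocks.
# Not the authors' method; a simple greedy heuristic for illustration only.
import numpy as np
from scipy.sparse import random as sparse_random

def count_nonzero_blocks(A, b):
    """Count the b x b tiles of dense array A that contain any nonzero.
    Only these tiles would be sent to a dense accelerator (e.g. Tensor Cores)."""
    m, n = A.shape
    count = 0
    for i in range(0, m, b):
        for j in range(0, n, b):
            if np.any(A[i:i + b, j:j + b]):
                count += 1
    return count

def greedy_row_reorder(A, b):
    """Greedily order rows so that consecutive rows share column patterns,
    clustering nonzeros into fewer, denser tiles (one heuristic of many)."""
    m = A.shape[0]
    patterns = [set(np.flatnonzero(A[i])) for i in range(m)]
    remaining = set(range(m))
    order = [remaining.pop()]
    while remaining:
        last = patterns[order[-1]]
        # Pick the remaining row whose nonzero columns overlap most with
        # the previously placed row.
        nxt = max(remaining, key=lambda r: len(last & patterns[r]))
        remaining.remove(nxt)
        order.append(nxt)
    return A[order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = sparse_random(256, 256, density=0.02, random_state=rng).toarray()
    b = 16
    before = count_nonzero_blocks(A, b)
    after = count_nonzero_blocks(greedy_row_reorder(A, b), b)
    print(f"nonzero {b}x{b} blocks: {before} -> {after}")

Since the reordering cost is amortized, the same permutation can be reused across many products with the reordered matrix, which is the regime where the abstract argues the pre-processing pays off.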
Event Type
Workshop
Time
Friday, 18 November 2022, 9:50am - 10am CST
Location
C144-145
Accelerator-based Architectures
Algorithms
Architectures
Big Data
Data Analytics
Parallel Programming Languages and Models
Productivity Tools
Recorded