SC22 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Workshops Archive

Compressed In-Memory Graphs for Accelerating GPU-Based Analytics


Workshop: IA^3 2022 - 12th Workshop on Irregular Applications: Architectures and Algorithms

Authors: Noushin Azami and Martin Burtscher (Texas State University)


Abstract: Processing large graphs has become an important irregular workload. We present Massively Parallel Log Graphs (MPLG) to accelerate GPU graph codes, including already highly optimized ones. MPLG combines a compressed in-memory representation with low-overhead parallel decompression. This yields a speedup if the boost in memory performance due to the reduced footprint outweighs the overhead of the extra instructions to decompress the graph on the fly. However, achieving a sufficiently low overhead is difficult, especially on GPUs with their high-bandwidth memory. Prior work has only successfully employed similar ideas on CPUs, but those approaches exhibit limited parallelism, making them unsuitable for GPUs. On large real-world inputs, MPLG speeds up graph analytics by up to 67% on a Titan V GPU. Averaged over 15 graphs from several domains, it improves the performance of Rodinia’s breadth-first search by 11.9%, Gardenia’s connected components by 5.8%, and ECL’s graph coloring by 5.0%.
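
The abstract does not specify the exact MPLG encoding, but the trade-off it describes (fewer bytes moved from memory versus a few extra decompression instructions per edge) can be illustrated with a hypothetical bit-packed CSR layout: each neighbor ID is stored in ceil(log2(|V|)) bits and unpacked on the fly inside the traversal loop. The CUDA sketch below is purely illustrative; the names getNeighbor, neighborSum, and packNeighbor and the packing scheme are assumptions for this example, not part of MPLG.

#include <cstdio>
#include <cuda_runtime.h>

// Extract the idx-th bit-packed neighbor ID (each ID occupies 'bits' bits).
// The packed array must be padded with one extra 32-bit word so the 64-bit
// window read never runs past the end.
__device__ unsigned int getNeighbor(const unsigned int* packed, long long idx, int bits)
{
  const long long bitpos = idx * (long long)bits;
  const long long word = bitpos >> 5;        // containing 32-bit word
  const int shift = (int)(bitpos & 31);
  const unsigned long long window =
      ((unsigned long long)packed[word + 1] << 32) | packed[word];
  return (unsigned int)((window >> shift) & ((1ULL << bits) - 1ULL));
}

// One thread per vertex walks its compressed neighbor list and sums the IDs,
// decompressing each edge on the fly instead of reading a full 32-bit value.
__global__ void neighborSum(const long long* rowOffsets, const unsigned int* packed,
                            const int bits, const int numVertices, unsigned long long* result)
{
  const int v = blockIdx.x * blockDim.x + threadIdx.x;
  if (v < numVertices) {
    unsigned long long sum = 0;
    for (long long j = rowOffsets[v]; j < rowOffsets[v + 1]; j++) {
      sum += getNeighbor(packed, j, bits);
    }
    result[v] = sum;
  }
}

// Host-side helper: write the idx-th neighbor ID into the packed array.
static void packNeighbor(unsigned int* packed, long long idx, int bits, unsigned int id)
{
  const long long bitpos = idx * (long long)bits;
  const long long word = bitpos >> 5;
  const int shift = (int)(bitpos & 31);
  const unsigned long long val = (unsigned long long)id << shift;
  packed[word] |= (unsigned int)val;
  packed[word + 1] |= (unsigned int)(val >> 32);
}

int main()
{
  // Toy 4-vertex graph in CSR form: 0->{1,2}, 1->{2}, 2->{0,1,3}, 3->{2}
  const int numVertices = 4;
  const long long rowOffsets[] = {0, 2, 3, 6, 7};
  const unsigned int neighbors[] = {1, 2, 2, 0, 1, 3, 2};
  const long long numEdges = 7;
  const int bits = 2;  // ceil(log2(numVertices)) bits per neighbor ID

  // pack on the host (+1 padding word for the device-side 64-bit window read)
  const long long words = (numEdges * bits + 31) / 32 + 1;
  unsigned int* packed = new unsigned int[words]();
  for (long long j = 0; j < numEdges; j++) packNeighbor(packed, j, bits, neighbors[j]);

  long long* d_off;  unsigned int* d_packed;  unsigned long long* d_res;
  cudaMalloc(&d_off, sizeof(rowOffsets));
  cudaMalloc(&d_packed, words * sizeof(unsigned int));
  cudaMalloc(&d_res, numVertices * sizeof(unsigned long long));
  cudaMemcpy(d_off, rowOffsets, sizeof(rowOffsets), cudaMemcpyHostToDevice);
  cudaMemcpy(d_packed, packed, words * sizeof(unsigned int), cudaMemcpyHostToDevice);

  neighborSum<<<1, 32>>>(d_off, d_packed, bits, numVertices, d_res);

  unsigned long long res[numVertices];
  cudaMemcpy(res, d_res, sizeof(res), cudaMemcpyDeviceToHost);
  for (int v = 0; v < numVertices; v++) printf("vertex %d: neighbor-ID sum %llu\n", v, res[v]);

  cudaFree(d_off); cudaFree(d_packed); cudaFree(d_res);
  delete [] packed;
  return 0;
}

Under these assumptions, the packed neighbor array is roughly bits/32 the size of a plain 32-bit neighbor array, so the kernel reads less data from memory, while each edge costs a few extra shift and mask instructions; whether that trades off favorably is exactly the condition for a speedup stated in the abstract.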

