Compressed In-Memory Graphs for Accelerating GPU-Based Analytics
Description
Processing large graphs has become an important irregular workload. We present Massively Parallel Log Graphs (MPLG) to accelerate GPU graph codes, including highly optimized codes. MPLG combines a compressed in-memory representation with low-overhead parallel decompression. This yields a speedup if the boost in memory performance due to the reduced footprint outweighs the overhead of the extra instructions to decompress the graph on the fly. However, achieving a sufficiently low overhead is difficult, especially on GPUs with their high-bandwidth memory. Prior work has only successfully employed similar ideas on CPUs, but those approaches exhibit limited parallelism, making them unsuitable for GPUs. On large real-world inputs, MPLG speeds up graph analytics by up to 67% on a Titan V GPU. Averaged over 15 graphs from several domains, it improves the performance of Rodinia’s breadth-first search by 11.9%, Gardenia’s connected components by 5.8%, and ECL’s graph coloring by 5.0%.
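To illustrate the trade-off the abstract describes (a smaller in-memory footprint paid for with extra decompression instructions), the sketch below shows delta-gap coding of sorted adjacency lists, a standard graph-compression idea. This is only an assumption-laden illustration of the general concept; it is not MPLG's actual log-graph format or its GPU decompression scheme.

```python
# Illustrative sketch only: delta-gap coding of a sorted neighbor list.
# This is NOT the MPLG format; it just shows how compression shrinks
# the values stored while decompression adds a few extra instructions.

def compress(neighbors):
    """Store sorted neighbor IDs as successive gaps (smaller values,
    so they fit in fewer bits under a variable-length encoding)."""
    gaps, prev = [], 0
    for v in neighbors:
        gaps.append(v - prev)
        prev = v
    return gaps

def decompress(gaps):
    """Recover the neighbor list with a prefix sum over the gaps --
    the kind of extra on-the-fly work whose cost must stay below the
    memory-bandwidth savings for compression to pay off."""
    out, total = [], 0
    for g in gaps:
        total += g
        out.append(total)
    return out

adj = [3, 7, 8, 42, 1000]
enc = compress(adj)              # [3, 4, 1, 34, 958]
assert decompress(enc) == adj    # round-trips losslessly
```

On a GPU, the prefix sum in `decompress` would itself be parallelized (e.g., one warp per adjacency list), which is the kind of low-overhead parallel decompression the abstract argues is needed to make this worthwhile on high-bandwidth memory.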
Event Type
Workshop
Time
Friday, 18 November 2022, 11:30am - 11:50am CST
Location
C144-145
Registration Categories
W
Tags
Accelerator-based Architectures
Algorithms
Architectures
Big Data
Data Analytics
Parallel Programming Languages and Models
Productivity Tools
Session Formats
Recorded