Extending OpenMP and OpenSHMEM for Efficient Heterogeneous Computing
Description
Heterogeneous supercomputing systems are becoming mainstream thanks to their powerful accelerators. However, the accelerators' special memory models and APIs increase development complexity and call for innovative programming model designs. To address this issue, OpenMP has added target offloading for portable accelerator programming, and MPI allows transparent send-receive of accelerator memory buffers. Meanwhile, Partitioned Global Address Space (PGAS) languages like OpenSHMEM are falling behind for heterogeneous computing because their special memory models pose additional challenges.

We propose language and runtime interoperability extensions for both OpenMP and OpenSHMEM that enable portable remote access to GPU buffers with minimal code changes. Our modified runtime systems work in coordination to manage accelerator memory, eliminating the need for staging communication buffers. Compared to the standard implementation, our extensions attain a 6x improvement in point-to-point latency, 1.3x better collective-operation latency, 4.9x higher random-access throughput, and up to 12.5% better strong scalability.
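To make the idea concrete, below is a minimal C sketch of what such interoperability could look like from the programmer's side. This is an illustration under stated assumptions, not the paper's actual API: shmemx_malloc_device is an invented placeholder name for allocating symmetric memory on the accelerator, and calling shmem_putmem directly on device-resident symmetric memory assumes the coordinated runtimes described above; only the standard OpenSHMEM and OpenMP constructs shown are real.

#include <shmem.h>
#include <omp.h>

#define N 1024

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Hypothetical extension: allocate a symmetric buffer directly in
     * accelerator memory (shmemx_malloc_device is an illustrative name,
     * not a standard or published API). */
    double *dev_buf = shmemx_malloc_device(N * sizeof(double));

    /* Fill the device-resident buffer from an OpenMP target region;
     * is_device_ptr tells OpenMP the pointer already refers to device
     * memory, so no map() staging is generated. */
    #pragma omp target teams distribute parallel for is_device_ptr(dev_buf)
    for (int i = 0; i < N; i++)
        dev_buf[i] = (double)(me * N + i);

    /* With coordinated runtimes, the put can source and target accelerator
     * memory directly, eliminating the host staging copy that a standard
     * implementation would require. */
    shmem_putmem(dev_buf, dev_buf, N * sizeof(double), (me + 1) % npes);
    shmem_barrier_all();

    shmem_free(dev_buf);  /* assuming device symmetric memory is freed
                           * through the usual shmem_free path */
    shmem_finalize();
    return 0;
}

In a standard OpenSHMEM program, the same exchange would need a host-side symmetric buffer plus explicit device-to-host and host-to-device copies around the put, which is the overhead the proposed extensions aim to remove.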
Event Type
Workshop
Time
Monday, 14 November 2022, 10:30am - 10:52am CST
Location
C147-154
Registration Categories
W
Tags
Applications
Architectures
Heterogeneous Systems
Hierarchical Parallelism
Parallel Programming Languages and Models
Performance
Performance Portability
Scientific Computing
Session Formats
Recorded