Compiler-Enabled Optimization of Persistent MPI Operations
Description
MPI also includes persistent operations, which specify recurring communication patterns. Using these operations should yield a performance benefit over standard non-blocking communication, but in current MPI implementations this benefit is hardly observable. We identify message envelope matching as one source of the overhead. Because a persistent MPI request can be used multiple times, the compiler can, in some cases, prove that message matching is only needed for the first occurrence and can be skipped entirely for subsequent uses.
We present the required compiler analysis and an implementation of a communication scheme that skips the message envelope matching. This substantially reduces the communication overhead that cannot be overlapped with computation. Using the Intel IMB-ASYNC benchmark, we observe a communication overhead reduction of up to 95% for larger message sizes.
Event Type
Workshop
Time
Sunday, 13 November 2022, 11am - 11:30am CST
Location
C147-154
Recorded