Improving Communication Asynchrony and Concurrency for Adaptive MPI Endpoints
Description
Thread-based MPI runtimes, which associate private communication contexts or endpoints with each thread rather than sharing a single context across a multithreaded process, have been proposed as an alternative to MPI's traditional multithreading models. Adaptive MPI (AMPI) is one such implementation, and in this work we identify and overcome shortcomings in its support for point-to-point communication. We also examine the consequences of MPI's messaging semantics on its runtime and investigate how its design can be improved for applications that do not require the full messaging semantics. We show that the issues for AMPI reduce to problems similar to those first identified in the context of efficient MPI+X support. Our focus is on enhancing AMPI's support for asynchrony and concurrency while still optimizing for communication locality through a novel locality-aware message matching scheme. We compare performance with and without the relaxed messaging semantics and our associated optimizations.