Parallel Computing 101
Description
This tutorial provides a comprehensive overview of parallel computing, emphasizing the aspects most relevant to the user. It is suitable for new users, students, managers, and anyone seeking an overview of parallel computing. It discusses software and hardware/software interaction, with an emphasis on standards, portability, and systems that are widely available.

The tutorial surveys basic parallel computing concepts using examples from engineering, science, and machine learning. These examples illustrate the use of MPI on distributed-memory systems; OpenMP on shared-memory systems; MPI+OpenMP on hybrid systems; and CUDA and compiler directives on GPUs and accelerators. It discusses numerous parallelization and load balancing approaches, as well as software engineering and performance improvement aspects, including the use of state-of-the-art tools.

The tutorial helps attendees make intelligent decisions by covering the primary options that are available, explaining how the different components work together and what they are suitable for. Extensive pointers to web-based resources are provided for follow-up studies.
Time: Sunday, 13 November 2022, 8:30am - 5pm CST
Tags:
Directive Based Programming
Heterogeneous Systems
Parallel Programming Languages and Models