Learning to Parallelize Source Code via OpenMP with Transformers
Description
In recent years, the world has shifted to multi-core and many-core shared memory architectures. As a result, there is a growing need to exploit these architectures by introducing shared memory parallelization schemes, such as OpenMP, into software applications. Nevertheless, introducing OpenMP into code, especially legacy code, is challenging due to pervasive pitfalls in managing parallel shared memory. To ease this task, many source-to-source (S2S) compilers have been created over the years, tasked with inserting OpenMP directives into code automatically. However, in addition to being sensitive to their input format, these compilers still do not achieve satisfactory coverage and precision in locating parallelizable code and generating appropriate directives. In this work, we propose leveraging recent advances in machine learning, specifically the Transformer model from natural language processing (NLP), to suggest the need for an OpenMP directive or for specific clauses (reduction and private).
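For readers unfamiliar with the clauses in question, the sketch below is a generic illustration (not taken from the poster) of the kind of loop such a tool must recognize: a serial summation that can be parallelized with an OpenMP directive carrying both a private and a reduction clause.

#include <stdio.h>

#define N 1000000

int main(void) {
    double data[N];
    for (int i = 0; i < N; i++) {
        data[i] = (double)i;
    }

    double sum = 0.0;
    double scaled;  /* temporary that must be private to each thread */

    /* The directive a parallelization tool would be expected to suggest:
       'private(scaled)' gives each thread its own copy of the temporary,
       and 'reduction(+:sum)' safely combines the per-thread partial sums. */
    #pragma omp parallel for private(scaled) reduction(+:sum)
    for (int i = 0; i < N; i++) {
        scaled = data[i] * 0.5;
        sum += scaled;
    }

    printf("sum = %f\n", sum);
    return 0;
}

Compiled with OpenMP support (e.g., -fopenmp), the loop runs across threads; the learning task described above is to predict whether such a directive is needed and which of these clauses it should carry.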
Event Type
Posters
Research Posters
Time
Thursday, 17 November 2022, 8:30am - 5pm CST
Location
C1-2-3
Registration Categories
TP
XO/EX