SC22 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Workshops Archive

Reinforcement Learning Strategies for Compiler Optimization in High Level Synthesis


Workshop: LLVM-HPC2022: The Eighth Workshop on the LLVM Compiler Infrastructure in HPC

Authors: Hafsah Shahzad and Martin Herbordt (Boston University)


Abstract: High Level Synthesis (HLS) offers a possible programmability solution for FPGAs but currently delivers far lower hardware quality than circuits written using Hardware Description Languages (HDLs). One reason is that the standard set of code optimizations used by CPU compilers, such as LLVM, is not well suited for an FPGA backend.

While much work has been done employing reinforcement learning for compilers in general, that directed toward HLS is limited and conservative. We expand both the number of learning strategies for HLS compiler tuning and the metrics used to evaluate their impact. Our results show improvements over the state-of-the-art for each standard benchmark evaluated and learning quality metric investigated. Choosing the right strategy can improve learning speed by 23x, performance potential by 4x, and speedup over -O3 by 3x, and can largely eliminate the fluctuation band from the final results.
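The kind of learning loop the abstract describes can be illustrated with a minimal sketch. The pass names, candidate sequences, and reward values below are hypothetical placeholders, not the paper's actual setup: an epsilon-greedy agent samples candidate LLVM pass sequences and updates a value estimate from a synthetic reward, where a real system would instead score each sequence with feedback from the HLS toolchain (e.g., estimated latency or resource usage).

```python
import random

# Hypothetical candidate pass sequences; real strategies would search a
# much larger space of LLVM transform passes.
PASS_SEQUENCES = [
    ("-loop-unroll", "-instcombine"),
    ("-instcombine", "-loop-unroll", "-gvn"),
    ("-mem2reg", "-loop-rotate"),
]

def synthetic_reward(idx):
    # Stand-in for a real HLS quality metric (higher is better).
    return {0: 0.4, 1: 0.9, 2: 0.6}[idx]

def run_bandit(episodes=500, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit over pass sequences; returns the best found."""
    rng = random.Random(seed)
    counts = [0] * len(PASS_SEQUENCES)
    values = [0.0] * len(PASS_SEQUENCES)
    for _ in range(episodes):
        if rng.random() < epsilon:
            arm = rng.randrange(len(PASS_SEQUENCES))   # explore
        else:
            arm = max(range(len(PASS_SEQUENCES)),
                      key=lambda i: values[i])         # exploit
        reward = synthetic_reward(arm)
        counts[arm] += 1
        # Incremental mean update of the value estimate.
        values[arm] += (reward - values[arm]) / counts[arm]
    best = max(range(len(PASS_SEQUENCES)), key=lambda i: values[i])
    return PASS_SEQUENCES[best]
```

Under this toy reward, the agent converges on the highest-scoring sequence; the paper's contribution lies in which learning strategies and evaluation metrics make such a loop effective for HLS.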




