Authors: Lillian Wang, Avik Malladi, and Yuede Ji (University of North Texas)
Abstract: This poster presents GPU optimizations for Sparse Deep Neural Networks (SpDNNs) using Apache TVM. Among the many deep neural network models, SpDNNs can substantially reduce the size and memory footprint of neural networks. At the same time, SpDNNs pose unique scalability challenges that leave room for optimization. Apache TVM is a machine learning compiler framework for CPUs and GPUs, and it has shown promising improvements in the performance, deployment, and optimization of neural networks. To evaluate its effectiveness for SpDNNs, this work implements SpDNNs with Apache TVM and compares them with existing SpDNN implementations. When tested on various datasets, the TVM-based implementation achieves faster and more efficient execution.
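For readers unfamiliar with the workload, the following is a hypothetical illustration (not the authors' code, and plain Python rather than TVM) of the core computation in sparse DNN inference: one layer computing Y' = ReLU(Y W + b), where the weight matrix W is stored in compressed sparse row (CSR) form. A kernel of this shape is what a TVM schedule would target and optimize on GPU.

```python
# Hypothetical sketch: one sparse DNN inference layer, Y' = ReLU(Y*W + b),
# with the weight matrix W in CSR form (indptr/indices/data arrays).

def csr_spmm_relu(y, w_indptr, w_indices, w_data, n_out, bias):
    """Multiply dense activations y (rows = samples) by a CSR weight
    matrix, add a per-column bias, and apply ReLU."""
    out = [[0.0] * n_out for _ in range(len(y))]
    for i, row in enumerate(y):
        for k, a in enumerate(row):          # a = y[i][k]
            if a == 0.0:
                continue                     # skip zero activations
            # accumulate the nonzeros of row k of W
            for p in range(w_indptr[k], w_indptr[k + 1]):
                out[i][w_indices[p]] += a * w_data[p]
    for i in range(len(out)):                # bias + ReLU
        for j in range(n_out):
            out[i][j] = max(0.0, out[i][j] + bias[j])
    return out

# Example: 2x2 weight matrix W = [[1, 0], [0, 2]] in CSR form.
indptr, indices, data = [0, 1, 2], [0, 1], [1.0, 2.0]
y = [[3.0, -1.0]]
print(csr_spmm_relu(y, indptr, indices, data, 2, [0.0, 0.0]))  # [[3.0, 0.0]]
```

In a TVM-based implementation, this triple loop would instead be expressed as a tensor computation so the compiler can generate and tune a GPU kernel for it.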
Best Poster Finalist (BP): no