HyLo: A Hybrid Low-Rank Natural Gradient Descent Method
Session: HPC and ML
Description: This work presents HyLo, a Hybrid Low-Rank Natural Gradient Descent method that accelerates the training of deep neural networks. Natural gradient descent (NGD) requires computing the inverse of the Fisher information matrix (FIM), which is typically expensive at large scale. Kronecker factorization methods such as KFAC improve NGD's running time by approximating the FIM with Kronecker factors; however, the size of these factors grows quadratically with the model size. HyLo instead builds on the Sherman-Morrison-Woodbury variant of NGD (SNGD) and proposes a reformulation of SNGD that resolves its scalability issues. HyLo uses a computationally efficient low-rank factorization to compute Fisher inverses quickly. We evaluate HyLo on large models including ResNet-50, U-Net, and ResNet-32 on up to 64 GPUs. HyLo converges 1.4x-2.1x faster than the state-of-the-art distributed implementation of KFAC and reduces computation and communication time by up to 350x and 10.7x, respectively, on ResNet-50.
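The key linear-algebra fact behind SMW-based NGD variants like SNGD is that a damped low-rank matrix can be inverted through a small r x r system rather than a full n x n one. The sketch below is an illustration of that identity only (the function name and shapes are hypothetical, not HyLo's actual implementation): for F = lam*I + U U^T with U of shape (n, r) and r much smaller than n, the inverse costs O(n r^2) instead of O(n^3).

```python
import numpy as np

def smw_inverse(U, lam):
    """Invert lam*I_n + U @ U.T via the Sherman-Morrison-Woodbury identity.

    Only an r x r matrix is factorized, so the cost is O(n*r^2) for r << n.
    (Hypothetical helper for illustration; not from the HyLo paper.)
    """
    n, r = U.shape
    small = np.linalg.inv(lam * np.eye(r) + U.T @ U)  # r x r system
    return (np.eye(n) - U @ small @ U.T) / lam

# Compare against the direct dense inverse on a small random instance.
rng = np.random.default_rng(0)
n, r, lam = 200, 8, 1e-2
U = rng.standard_normal((n, r))
F = lam * np.eye(n) + U @ U.T
assert np.allclose(smw_inverse(U, lam), np.linalg.inv(F))
```

Because the per-GPU gradient statistics that form U are low rank, this identity is what lets the Fisher inverse scale with the rank r rather than the layer dimension n.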
Event Type
Paper
Time: Wednesday, 16 November 2022, 2pm - 2:30pm CST
Location: C140-142
Session Formats
Recorded
Tags
Machine Learning and Artificial Intelligence
Registration Categories
TP