HammingMesh: A Network Topology for Large-Scale Deep Learning
Session: HPC Network Architecture
Description: Numerous microarchitectural optimizations unlocked tremendous processing power for deep neural networks, which in turn fueled the AI revolution. With such optimizations now exhausted, the growth of modern AI is gated by the performance of training systems, especially their data movement. Instead of focusing on single accelerators, we investigate the data-movement characteristics of large-scale training at full system scale. Based on our workload analysis, we design HammingMesh, a novel network topology that provides high bandwidth at low cost with high job-scheduling flexibility. Specifically, HammingMesh can provide full bandwidth and isolation for deep learning training jobs with two dimensions of parallelism. It also supports high global bandwidth for generic traffic. Thus, HammingMesh will power future large-scale deep learning systems with extreme bandwidth requirements.
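The abstract's key technical claim is that training jobs parallelized along two dimensions map naturally onto a two-dimensional topology. As a minimal illustration of that mapping (a sketch under assumed dimensions, not the paper's construction), the Python snippet below places each accelerator of a job on a DP x PP grid: pipeline neighbors lie along one axis and the data-parallel allreduce ring along the other, so each communication pattern stays within a single dimension.

# Illustrative sketch only: how a job with two dimensions of parallelism
# (here assumed to be data parallelism and pipeline parallelism) maps onto
# a 2D grid of accelerators. DP, PP, and all names are assumptions.

DP = 4  # data-parallel replicas (assumed)
PP = 3  # pipeline stages (assumed)

def coords(rank):
    # Flat rank -> (data-parallel index, pipeline stage).
    return rank // PP, rank % PP

def pipeline_peers(rank):
    # Previous and next pipeline stage along the pipeline axis (None at ends).
    d, p = coords(rank)
    prev = d * PP + (p - 1) if p > 0 else None
    nxt = d * PP + (p + 1) if p < PP - 1 else None
    return prev, nxt

def allreduce_ring_peer(rank):
    # Next peer in this stage's data-parallel allreduce ring.
    d, p = coords(rank)
    return ((d + 1) % DP) * PP + p

if __name__ == "__main__":
    for rank in range(DP * PP):
        print(rank, coords(rank), pipeline_peers(rank), allreduce_ring_peer(rank))

Because each pattern is confined to one grid dimension, a topology that offers full bandwidth along both dimensions of such a 2D job slice, as HammingMesh aims to, can serve both patterns without interference.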
Event Type: Paper
Time: Tuesday, 15 November 2022, 11am - 11:30am CST
Location: C141-143-149
Tags: Architectures, Networks
Award: Best Reproducibility Advancement Finalist