CoGNN: Efficient Scheduling for Concurrent GNN Training on GPUs
Description
Graph neural networks (GNNs) suffer from low GPU utilization due to frequent memory accesses. Existing concurrent training mechanisms cannot be directly adapted to GNNs because they fail to account for input irregularity: the memory footprint of concurrent tasks must be pre-profiled from their input dimensions to ensure that they can be safely co-located on a GPU. Moreover, the massive numbers of training tasks generated in scenarios such as hyper-parameter tuning require flexible scheduling strategies. To address these problems, we propose CoGNN, which enables efficient management of GNN training tasks on GPUs. Specifically, CoGNN organizes the tasks in a queue and estimates the memory consumption of each task using per-operator cost functions. In addition, CoGNN implements scheduling policies that generate task groups, which are iteratively submitted for execution. Experimental results show that CoGNN achieves shorter completion and queuing times for training tasks from diverse GNN models.
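The core mechanism the abstract describes (a task queue, per-operator memory cost functions, and a policy that packs tasks into concurrently executable groups) can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: the Task fields, the linear coefficients in OP_COST_MODELS, the operator set, and the first-fit grouping policy are all hypothetical placeholders.

# Hedged sketch of queue-based grouping with per-operator memory estimates.
# All names and coefficients below are assumptions, not CoGNN's actual code.

from dataclasses import dataclass, field
from collections import deque
from typing import Callable

# Hypothetical per-operator cost functions: map input dimensions to an
# estimated memory footprint in MiB. The linear coefficients are placeholders.
OP_COST_MODELS: dict[str, Callable[[int, int], float]] = {
    "spmm":    lambda n_nodes, feat_dim: 0.004 * n_nodes * feat_dim / 1024,
    "linear":  lambda n_nodes, feat_dim: 0.008 * n_nodes * feat_dim / 1024,
    "softmax": lambda n_nodes, feat_dim: 0.002 * n_nodes * feat_dim / 1024,
}

@dataclass
class Task:
    """One GNN training task, e.g. a single hyper-parameter-tuning trial."""
    name: str
    n_nodes: int      # input graph size
    feat_dim: int     # hidden feature dimension
    ops: list[str] = field(default_factory=lambda: ["spmm", "linear", "softmax"])

def estimate_task_memory(task: Task) -> float:
    """Sum per-operator estimates to get the task's footprint (MiB)."""
    return sum(OP_COST_MODELS[op](task.n_nodes, task.feat_dim) for op in task.ops)

def schedule(queue: "deque[Task]", gpu_budget_mib: float) -> list[list[Task]]:
    """Greedy first-fit grouping: pack queued tasks into concurrent groups
    whose combined estimated footprint stays within the GPU memory budget."""
    groups: list[list[Task]] = []
    while queue:
        group, used = [], 0.0
        # Scan the queue once, co-locating every task that still fits.
        for _ in range(len(queue)):
            task = queue.popleft()
            need = estimate_task_memory(task)
            if used + need <= gpu_budget_mib:
                group.append(task)
                used += need
            else:
                queue.append(task)   # defer to a later group
        if not group:                # task too large to co-locate: run it alone
            group.append(queue.popleft())
        groups.append(group)         # each group is submitted for execution
    return groups

if __name__ == "__main__":
    tasks = deque(Task(f"trial-{i}", n_nodes=100_000 * (i + 1), feat_dim=128)
                  for i in range(6))
    for i, group in enumerate(schedule(tasks, gpu_budget_mib=2_000)):
        print(f"group {i}: {[t.name for t in group]}")

Estimating footprints before launch is what lets the scheduler co-locate tasks safely despite input irregularity: a task whose estimate would exceed the remaining budget is simply deferred to a later group rather than risking an out-of-memory failure.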
Event Type
Paper
Time
Wednesday, 16 November 2022, 11:30am - 12pm CST
Location
C140-142
Session Formats
Recorded
Tags
Machine Learning and Artificial Intelligence
Registration Categories
TP