Supporting Efficient Workflow Deployment of Federated Learning Systems on the Computing Continuum
Session: Research Posters Display
Description: Federated Learning (FL) is a distributed Machine Learning paradigm that collaboratively trains a shared model while preserving privacy by letting clients process their private data locally. In the Computing Continuum context (the edge-fog-cloud ecosystem), FL raises several challenges, such as supporting highly heterogeneous devices and optimizing massively distributed applications.
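The core of the paradigm described above can be sketched with the standard FedAvg aggregation step: clients train locally and only send model parameters, which the server averages weighted by each client's dataset size. This is a minimal illustrative sketch, not the poster's actual implementation; the function and variable names are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameter vectors,
    weighted by the size of each client's local dataset.
    Only parameters are shared; raw data never leaves the clients."""
    total = sum(client_sizes)
    num_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(num_params)
    ]

# Three clients with two-parameter models and differing local dataset sizes
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
fed_avg(weights, sizes)  # → [4.0, 5.0]
```

Weighting by dataset size keeps the global model from being dominated by clients that hold very little data, which matters on heterogeneous continuum devices.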
We propose a workflow to better support and optimize FL systems across the Computing Continuum, relying on formal descriptions of the infrastructure, hyperparameter optimization, and model retraining in case of performance degradation. We motivate our approach with preliminary results on a human activity recognition dataset. The next objective is to implement and deploy our solution on the Grid’5000 testbed.
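The retraining step mentioned above needs a trigger for performance degradation. One simple approach, shown here as a hedged sketch (the threshold, window size, and function names are illustrative assumptions, not the workflow's actual policy), is to compare a moving average of recent accuracy against the best accuracy observed so far:

```python
def needs_retraining(history, window=3, threshold=0.05):
    """Trigger retraining when the moving average of the last `window`
    accuracy measurements drops more than `threshold` below the best
    accuracy seen so far. Values are assumed to be in [0, 1]."""
    if len(history) < window:
        return False  # not enough observations yet
    recent = sum(history[-window:]) / window
    return max(history) - recent > threshold

# Accuracy degrades from ~0.92 to ~0.84, exceeding the 0.05 tolerance
needs_retraining([0.90, 0.91, 0.92, 0.85, 0.84, 0.83])  # → True
```

In a continuum deployment, such a check could run periodically at the server and schedule retraining rounds only when the deployed model has actually drifted.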
During the poster session, I will start by presenting the main problems of applying FL in the Computing Continuum and how our approach tackles them. Next, I will present preliminary results and discuss the remaining challenges.