3rd Workshop on Principles of Distributed Learning
Co-located with the ACM Symposium on Principles of Distributed Computing (PODC 2024)
Context and Motivations
Machine learning (ML) algorithms are now ubiquitous in many aspects of our lives. Their recent success relies essentially on the advent of a new generation of processing units (e.g., GPUs and TPUs), together with parallel and distributed systems that enable efficient use of an ever-increasing amount of data and computational resources. This unlocks the use of ML algorithms in many data-oriented, high-stakes applications such as medicine, finance, and recommendation systems. Beyond parallel programming, which distributes computational tasks across machines within a data center, technology companies have recently made enormous investments in federated learning (FL), which aims to deploy distributed methodologies on edge devices (e.g., laptops or smartphones). This unprecedented rise in the design and implementation of distributed schemes is commonly referred to as distributed ML. Nevertheless, owing to its distributed nature, distributed ML suffers from several issues, including (but not limited to) asynchrony, system failures, heterogeneous sampling, and consistency. These issues raise important questions for both academia and industry. The purpose of our workshop is to gather researchers who address the challenges of distributed ML, and to facilitate fruitful collaborations between the two communities of distributed computing and machine learning.
Organisation & Tentative Schedule
The workshop will be held on Friday, June 21, 2024, in collaboration with PODC 2024 in Nantes, France. The event is organized around a series of invited talks that aim to present the latest results in the field and to foster new ideas and collaborations. The purpose of the invited talks is to gather people working on the challenges of distributed ML and to discuss ideas that have been, or will be, published. There will be no formal proceedings for the workshop. Each presenter will be allotted 30 to 45 minutes (to be determined according to the number of speakers) to present their work.
Program
-- 9:15-9:30 Opening
-- 9:30-10:30 Aurélien Bellet (INRIA), Privacy in Decentralized Machine Learning.
-- 10:30-10:50 Coffee break
-- 11:00-12:00 Giovanni Neglia (INRIA), Collaborative Inference Systems.
-- 12:00-13:30 Lunch
-- 13:30-14:30 Laurent Massoulié (INRIA), Collaborative Methods for: i) Learning Shared Representations; ii) Exploring Unknown Environments.
-- 14:30-15:30 Eduard Gorbunov (MBZUAI), Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences.
-- 15:30-16:00 Coffee break
-- 16:00-17:00 Sonia Ben Mokhtar (INRIA), Decentralized Learning as an Enabler for Decentralized Online Services.
-- 17:00-18:00 Lili Su (Northeastern University), Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability.
-- 18:00-18:15 Closing
See also the 2nd Edition of the Workshop (Co-located with DISC 2023)
Organizers
- Rachid Guerraoui (EPFL)
- Nirupam Gupta (EPFL)
- Rafael Pinot (Sorbonne Université)