2nd Workshop on Principles of Distributed Learning

Co-located with International Symposium on Distributed Computing DISC 2023

Context and Motivations

Machine learning (ML) algorithms are now ubiquitous in many aspects of our lives. Their recent success essentially relies on the advent of a new generation of processing units (e.g., GPUs and TPUs), together with parallel and distributed systems that enable efficient use of an ever-increasing amount of data and computational resources. This unlocks the use of ML algorithms in many data-oriented, high-stakes applications such as medicine, finance, and recommendation systems. Beyond parallel programming, which distributes computational tasks across machines within a data center, technology companies have recently made enormous investments in federated learning (FL), which aims to deploy distributed methodologies on edge devices (e.g., laptops and smartphones). This unprecedented rise in the design and implementation of distributed schemes is commonly referred to as distributed ML. Nevertheless, owing to its distributed nature, distributed ML suffers from several issues including (but not limited to) asynchrony, system failures, heterogeneous sampling, and consistency. These issues raise important questions for both academia and industry. The purpose of our workshop is to gather researchers who address the challenges of distributed ML, and to facilitate fruitful collaborations between the two communities of distributed computing and machine learning.

Organisation & Tentative Schedule

The workshop will be held on October 13, 2023, in collaboration with DISC 2023 in L’Aquila, Italy. The event is organized in two parts: a tutorial in the morning for non-experts, and a series of invited talks in the afternoon that present the latest results in the field and aim to foster new ideas and collaborations.

  • The purpose of the tutorial is to present the basics of this new field, at the intersection of ML and distributed computing. A special focus will be given to the connections between robustness issues in distributed ML and Byzantine resilience in distributed computing. The tutorial will be given by Nirupam Gupta (EPFL), Rafael Pinot (EPFL), and Rachid Guerraoui (EPFL).

  • The purpose of the invited talks is to gather people working on the challenges of distributed ML and to discuss ideas, both already published and forthcoming. There will be no formal proceedings for the workshop. Each presenter, listed below, will be allotted 30 minutes to present their work.


-- 9.00-9.10: Opening remarks

-- 9.10-10.30: Rafael Pinot (EPFL), Tutorial Part - I: Introduction to Robust Machine-Learning


-- 10.30-11.00: Coffee break

-- 11.00-12.20: Nirupam Gupta (EPFL), Tutorial Part - II: Robust ML with Privacy Constraints


-- 12.20-14.00: Lunch

-- 14.00-14.30: Salma Kharrat (KAUST), FilFL: Client Filtering for Optimized Client Participation in Federated Learning


-- 14.30-15.00: Marco Canini (KAUST), Why Federated Learning isn’t like PAPER?


-- 15.00-15.30: Ce Zhang (ETH Zürich), Optimizing Communications and Data for Distributed and Decentralized Learning

-- 15.30-16.00: Coffee break

-- 16.00-16.30: Lili Su (Northeastern University), A Non-parametric View of FedAvg and FedProx: Beyond Stationary Points


-- 16.30-17.00: Alberto Pedrouzo Ulloa (CEA LIST), Practical Multi-Key Homomorphic Encryption for Federated Average Aggregation


-- 17.00-17.30: Closing remarks

See also the 1st edition of the workshop (co-located with PODC 2022)