PODC 2022 Workshop: Principles of Distributed Learning

The objective of the workshop is to bring together researchers working in the important field of distributed machine learning to discuss work that has been published or is forthcoming. There will be no formal proceedings. Each presenter, listed below, will have 20 minutes to present their work. The workshop will be held on July 25, 2022, in conjunction with PODC’22 in Salerno, Italy.

The program can be found here.

Presenter | Institution | Tentative Title
Youssef Allouah | EPFL, Switzerland | Robust Sparse Voting
Hagit Attiya | Technion, Israel | Asynchronous Distributed Machine Learning
Ce Zhang | ETH Zurich, Switzerland | Scaling Up Distributed Learning with System Relaxations: Bagua and Beyond
Nirupam Gupta | EPFL, Switzerland | The Crucial Role of Momentum in Byzantine Learning
Suhas Diggavi | UCLA, USA | On Privacy and Security in Federated Learning
Konstantin Burlachenko | KAUST, Saudi Arabia | MARINA: Faster Non-Convex Distributed Learning with Compression
Sadegh Farhadkhani | EPFL, Switzerland | Collaborative Learning Is an Agreement Problem
Rafael Pinot | EPFL, Switzerland | Can Byzantine Learning Be Private?
Dan Alistarh | IST, Austria | Elastic Consistency: A General Consistency Model for Distributed Optimization
Waheed Bajwa | Rutgers University, USA | Scalable Algorithms for Distributed Principal Component Analysis
Anne-Marie Kermarrec | EPFL, Switzerland | Frugal Distributed Learning
Nir Shavit | MIT, USA | Tissue vs. Silicon: Musings on the Future of Deep Learning Hardware and Software
Indranil Gupta | UIUC, USA | Hammer or Gavel. Or How I Learnt to Stop Learning and Love the Old-Fashioned Algorithm
Lili Su | Northeastern University, USA | A Non-Parametric View of FedAvg and FedProx: Beyond Stationary Points
Arnaud Grivet Sébert | CEA, France | Machine Learning Without Jeopardizing the Data
Marco Canini | KAUST, Saudi Arabia | Accelerated Deep Learning via Efficient, Compressed and Managed Communication