PODC 2022 Workshop: Principles of Distributed Learning

The objective of the workshop is to bring together researchers working in the important field of distributed machine learning to discuss ideas that have been published or are yet to be published. There will be no formal proceedings for the workshop. Each presenter, listed below, will be allotted 20 minutes to present their work. The workshop will be held on July 25, 2022, in conjunction with PODC'22 in Salerno, Italy.

The program is listed below. Click on a title to access the slides.

Presenter | Institution | Tentative Title
Youssef Allouah | EPFL, Switzerland | Robust Sparse Voting
Hagit Attiya | Technion, Israel | Asynchronous Distributed Machine Learning
Sadegh Farhadkhani | EPFL, Switzerland | Collaborative Learning Is an Agreement Problem
Dan Alistarh | IST Austria | Elastic Consistency: A General Consistency Model for Distributed Optimization
Waheed Bajwa | Rutgers University, USA | Scalable Algorithms for Distributed Principal Component Analysis
Nirupam Gupta | EPFL, Switzerland | The Crucial Role of Momentum in Byzantine Learning
Anne-Marie Kermarrec | EPFL, Switzerland | Frugal Distributed Learning
Nir Shavit | MIT, USA | Tissue vs. Silicon: Musings on the Future of Deep Learning Hardware and Software
Indranil Gupta | UIUC, USA | Hammer or Gavel. Or How I Learnt to Stop Learning and Love the Old-Fashioned Algorithm
Arnaud Grivet Sébert | CEA, France | Machine Learning without Jeopardizing the Data
Marco Canini | KAUST, Saudi Arabia | Accelerated Deep Learning via Efficient, Compressed and Managed Communication
Rafael Pinot | EPFL, Switzerland | Can Byzantine Learning Be Private?