Education


The lab is teaching the following courses:


In the past, the lab taught the following courses:



Projects

Master Projects

DCL offers master projects in the following areas:

  • On the design and implementation of scalable and secure blockchain algorithms: Consensus has recently gained in popularity with the advent of blockchain technologies. Unfortunately, most blockchains do not scale due, in part, to their centralized (leader-based) design. We recently designed a fully decentralized (leader-less) algorithm that promises to scale to large networks. The goal of this project is to implement it in Rust and compare its performance on AWS instances against a traditional leader-based alternative, such as BFT-SMaRt, whose code will be provided. Contact Vincent Gramoli for more information.
  • Making Blockchain Accountable: One of the key drawbacks of blockchain is its lack of accountability: it does not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double-spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be costly in communication: to detect a malicious participant who has sent deceitful messages to different honest participants to make them disagree, one may be tempted to force every honest participant to exchange and cross-check all the messages they receive. However, we have recently designed an algorithm that shares the same communication complexity as the consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it on a distributed set of machines against a baseline implementation. Contact Vincent Gramoli for more information.
  • Robust mean estimation: In recent years, many algorithms have been proposed to perform robust mean estimation, which has been shown to be equivalent to robust gradient-based machine learning. A new concept, the averaging constant (along with Byzantine resilience), has been proposed to quantify the performance of a robust mean estimator. This research project consists of computing the theoretical averaging constant of different proposed robust mean estimators and studying their empirical performance on randomly generated vectors. Contact Sadegh Farhadkhani for more information.
  • Accelerate Byzantine collaborative learning: Our recent NeurIPS paper proposed algorithms for collaborative machine learning in the presence of Byzantine nodes, which have been proved to be near-optimal at convergence. However, these algorithms require all-to-all communication at every round, which is expensive. This research consists of designing a practical solution to Byzantine collaborative learning based on the idea of a random communication network at each round, with both theoretical guarantees and a practical implementation. Contact Sadegh Farhadkhani for more information.
  • Decentralize Tournesol’s learning algorithms: The Tournesol platform leverages the contributions of its community of contributors to assign a “should be more recommended” score to YouTube videos rated by the contributors, using a learning algorithm. Currently, the computations are performed on a central server. But as Tournesol’s user base grows, and as more sophisticated learning algorithms are considered for deployment, there is a growing need to decentralize the computations of the learning algorithm. This project aims to build a framework that will enable Tournesol users to run part of the computation of Tournesol’s scores directly in their browsers. Contact Lê Nguyên Hoang for more information.
  • Listening to the silent majority: Vanilla machine learning from user-generated data inevitably favors those who generated the most data. This means that learning algorithms will be optimized for these users rather than for the silent majority. This research aims to correct this bias by inferring what data the majority would likely have generated, and what the models would have learned had the silent majority’s data been included in training. It involves designing algorithms, proving their correctness, and implementing them. This research is motivated by the Tournesol project. Contact Lê Nguyên Hoang for more information.
  • Should experts be given more voting rights?: This is a question that Condorcet tackled in 1785, through what is now known as the jury problem. However, his model was crude and does not apply to many critical problems, e.g., determining whether a video on vaccines should be widely recommended. This research aims to better understand how voting rights should be allocated, based not only on how likely voters are to be correct, but also on the correlations between voters’ judgments. So far, it mostly involves theoretical analysis. This research is motivated by the Tournesol project. Contact Lê Nguyên Hoang for more information.
  • Probabilistic Byzantine Resilience: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on proving the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Contact Matteo Monti for more information.
  • Distributed coordination using RDMA: RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. This technology has been gaining traction over the last couple of years, as it enables the creation of real-time distributed systems. RDMA brings communication close to the μsec scale, which makes it possible to design and implement systems that process requests in only tens of μsec. Current research focuses on achieving real-time failure detection through a combination of novel algorithm design, the latest hardware, and Linux kernel customization. Fast failure detection over RDMA brings the notion of availability to a new level, essentially allowing modern systems to enter the era of seven nines of availability. Contact Athanasios Xygkis and Antoine Murat for more information.
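To give a flavor of the robust mean estimation topic above, here is a minimal, self-contained sketch (an illustration only; the specific estimators and their averaging constants are the subject of the project itself). It compares plain averaging with two classic robust estimators, the coordinate-wise median and the coordinate-wise trimmed mean, on randomly generated vectors where a few reports are Byzantine:

```python
import numpy as np

rng = np.random.default_rng(0)

# n workers report estimates of an unknown mean; f of them are Byzantine
# and report arbitrary vectors (here: large adversarial values).
n, f, d = 17, 3, 10
true_mean = np.zeros(d)
honest = rng.normal(loc=true_mean, scale=1.0, size=(n - f, d))
byzantine = np.full((f, d), 100.0)  # adversarial outliers
reports = np.vstack([honest, byzantine])

# Plain averaging: a single outlier can drag the estimate arbitrarily far.
naive = reports.mean(axis=0)

# Coordinate-wise median: robust to up to (n-1)//2 corrupted reports.
cw_median = np.median(reports, axis=0)

# Coordinate-wise trimmed mean: drop the f largest and f smallest
# values in each coordinate, then average what remains.
sorted_reports = np.sort(reports, axis=0)
trimmed = sorted_reports[f:n - f].mean(axis=0)

print("naive error:  ", np.linalg.norm(naive - true_mean))
print("median error: ", np.linalg.norm(cw_median - true_mean))
print("trimmed error:", np.linalg.norm(trimmed - true_mean))
```

Running this shows the naive average pulled far from the true mean by the outliers, while both robust estimators stay close; quantifying how close, via the averaging constant, is precisely what the project investigates.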


  • Byzantine-resilient heterogeneous GANs: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part due to the need to distribute machine learning across many nodes for performance and privacy reasons. Until now, it has focused on training a single model across many workers and many parameter servers. While this approach has produced formidable results, including in GAN training, the topic of efficient, distributed, and Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of Generative Adversarial Networks (GANs), such learning is critical to training light discriminators that specialize in detecting specific features of generator-generated images. The goal of this project is to investigate the potential for poisoning the GAN training process through malicious discriminators and generators, and to investigate efficient protocols that ensure the robustness of the training process. You will need experience with scientific computing in Python, ideally with PyTorch, and notions of distributed computing. Contact Andrei Kucharavy for more information.


  • GANs with Transformers: Since its introduction in 2017, the Transformer architecture has revolutionized NLP machine learning models. Thanks to the scalability of self-attention-only architectures, models can now scale to trillions of parameters, enabling human-like text generation. However, Transformers are not without shortcomings, notably due to their maximum-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning, Generative Adversarial Networks (GANs), performs remarkably well on images, but has until recently struggled with text, whose sequential and discrete nature is incompatible with the gradient back-propagation GANs need to train. Some of those issues have been solved, but a major one remains: limited scalability, due to the use of RNNs instead of pure self-attention architectures. Previously, we were able to show that it is impossible to trivially replace RNN layers with Transformer layers (https://arxiv.org/abs/2108.12275, presented at RANLP 2021). This project will build on those results, attempting either to create stable Transformer-based text GANs using the tricks known to stabilize Transformer training, or to theoretically demonstrate the inherent instability of Transformer-derived architectures in the adversarial regime. You will need a solid background in linear algebra, acquaintance with machine learning theory, specifically neural networks, as well as experience with scientific computing in Python, ideally with PyTorch. Experience with NLP is desirable but not required.


Semester Projects

If the subject of a Master Project interests you as a Semester Project, please contact the supervisor of that Master Project to see whether it can be adapted into a Semester Project.

EPFL I&C duration, credits and workload information are available here. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.