

Education


The lab is teaching the following courses:


In the past, the lab taught the following courses:



Projects

Master Projects

DCL offers master projects in the following areas:

  • On the design and implementation of scalable and secure blockchain algorithms: Consensus has recently gained in popularity with the advent of blockchain technologies. Unfortunately, most blockchains do not scale, due in part to their centralized (leader-based) design. We recently designed a fully decentralized (leader-less) algorithm that promises to scale to large networks. The goal of this project is to implement it in Rust and compare its performance on AWS instances against a traditional leader-based alternative such as BFT-SMaRt, whose code will be provided. Contact Vincent Gramoli for more information.
  • Making Blockchain Accountable: One of the key drawbacks of blockchain is its lack of accountability: it does not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be costly in communication: to detect a malicious participant who has sent deceitful messages to different honest participants in order to make them disagree, one may be tempted to force each honest participant to exchange and cross-check all the messages they receive (a toy illustration of this cross-checking follows this list). However, we recently designed an algorithm that shares the same communication complexity as the consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it on a distributed set of machines against a baseline implementation. Contact Vincent Gramoli for more information.
  • GAR performance on different datasets: Robust machine learning on textual data and content recommendation is critical for the safety of social media users (harassment, hate speech, etc.), but also for the reliability of scientific uses of natural language processing, such as processing computer programs, chemistry, and drug discovery. Text datasets are known to have long-tailed distributions, which poses specific challenges for robustness, while content recommendation datasets may feature clusters of similar users. The goal of this project is to better understand the properties of different datasets, and what makes one gradient aggregation rule (GAR, e.g. Krum, trimmed mean; see the sketch after this list) better than another on a specific text dataset (conversational chatbots, translation, GitHub code, etc.). Contact Lê Nguyên Hoang for more information.
  • Strategyproof collaborative filtering: In collaborative filtering, other users' inputs are used to generalize the preferences of a given user. Such an approach has been critical to improving performance. However, it exposes each user to manipulation by the inputs of malicious users, which is arguably occurring on social media today. In this theoretical project, we search for Byzantine-resilient and strategyproof learning algorithms that perform something akin to collaborative filtering (a one-dimensional illustration of strategyproofness follows this list). This would also have important applications for implicit voting systems on exponential-size decision sets. Contact Lê Nguyên Hoang for more information.
  • Probabilistic Byzantine Resilience: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results (a sketch of the kind of computation involved follows this list). Please contact Matteo Monti for more information.
  • Distributed coordination using RDMA: RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. This technology has been gaining traction over the last couple of years, as it allows for the creation of real-time distributed systems. RDMA brings communication close to the μsec scale, which enables the design and implementation of systems that process requests in only tens of μsec. Current research focuses on achieving real-time failure detection through a combination of novel algorithm design, the latest hardware, and Linux kernel customization. Fast failure detection over RDMA takes the notion of availability to a new level, essentially allowing modern systems to enter the era of 7 nines of availability. Contact Athanasios Xygkis and Antoine Murat for more information.
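
For intuition on the cross-checking idea in the accountability project above, here is a toy sketch in Python (the message format, names, and signature handling are purely illustrative, not the actual accountable algorithm):

    # Naive cross-checking: honest participants exchange the signed messages
    # they received and flag any sender that signed two different proposals
    # in the same round (equivocation). Signatures are assumed verified.
    def detect_equivocators(received_logs):
        seen = {}  # (sender, round) -> set of distinct signed proposals
        for log in received_logs:
            for sender, rnd, proposal, _sig in log:
                seen.setdefault((sender, rnd), set()).add(proposal)
        # Signing two different proposals in one round is provable misbehavior.
        return {sender for (sender, _), props in seen.items() if len(props) > 1}

    # Example: p3 tells p1 it proposes "A" but tells p2 it proposes "B" in round 1.
    log_p1 = [("p3", 1, "A", "sig1")]
    log_p2 = [("p3", 1, "B", "sig2")]
    print(detect_equivocators([log_p1, log_p2]))  # {'p3'}

The point of the project's algorithm is to obtain the same detection guarantee without the heavy all-to-all message exchange this naive scheme implies.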
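
For the GAR project above, here is a minimal sketch of one classic gradient aggregation rule, the coordinate-wise trimmed mean, in PyTorch (tensor shapes and the trim parameter are illustrative):

    import torch

    def trimmed_mean(gradients, f):
        """Coordinate-wise trimmed mean over one gradient tensor per worker;
        per coordinate, the f smallest and f largest values are discarded."""
        stacked = torch.stack(gradients)             # (n_workers, ...)
        sorted_vals, _ = torch.sort(stacked, dim=0)  # sort each coordinate
        return sorted_vals[f : len(gradients) - f].mean(dim=0)

    # Example: 5 workers, one of which submits an outlier (Byzantine) gradient.
    grads = [torch.ones(3) for _ in range(4)] + [torch.full((3,), 100.0)]
    print(trimmed_mean(grads, f=1))  # tensor([1., 1., 1.])

Krum, by contrast, scores each submitted gradient by its distance to its closest neighbors and keeps a single one; understanding which rule wins on which dataset is part of the project.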
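
For the strategyproofness notion in the collaborative-filtering project above, a classic one-dimensional illustration (much simpler than the actual setting): aggregating reported values with the median, so that no participant can pull the outcome toward their own preference by misreporting.

    import statistics

    # The median is strategyproof for single-peaked preferences: exaggerating
    # a report can only leave the outcome unchanged or push it further away
    # from the misreporting voter's true peak.
    truthful = [0.2, 0.5, 0.9]
    print(statistics.median(truthful))            # 0.5
    # The voter with peak 0.9 exaggerates to 100.0; the median does not budge.
    print(statistics.median([0.2, 0.5, 100.0]))   # 0.5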
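
As a flavor of the numerical-evaluation option in the probabilistic-resilience project above: guarantees of sampling-based Byzantine broadcast typically reduce to bounding the probability that a random sample contains too many Byzantine nodes. A minimal sketch, assuming sampling with replacement and illustrative parameters:

    from math import comb

    def bad_sample_prob(sample_size, byz_fraction, threshold):
        """Binomial tail: probability that more than `threshold` of the
        sampled nodes are Byzantine, when each draw is Byzantine with
        probability `byz_fraction` (sampling with replacement)."""
        return sum(
            comb(sample_size, k) * byz_fraction**k * (1 - byz_fraction)**(sample_size - k)
            for k in range(threshold + 1, sample_size + 1)
        )

    # Example: samples of 64 nodes, 10% Byzantine system-wide, at most 21 tolerated.
    print(bad_sample_prob(64, 0.10, 21))  # vanishingly small (on the order of 1e-7)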


  • Byzantine-resilient heterogeneous GANs: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part due to the need to distribute machine learning across many nodes for performance and privacy reasons. Until now, it has focused on training a single model across many workers and many parameter servers. While this approach has produced formidable results, including in GAN training, the topic of efficient, distributed, and Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of Generative Adversarial Networks (GANs), such learning is critical to training light discriminators that can specialize in detecting specific features of generator-generated images. The goal of this project is to investigate the potential for poisoning of the GAN training process by malicious discriminators and generators, and to investigate efficient protocols that ensure the robustness of the training process (a toy sketch of robust multi-discriminator feedback follows below). You will need to have experience with scientific computing in Python, ideally with PyTorch experience, and notions of distributed computing. Contact Andrei Kucharavy for more information.
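
As a toy sketch of one robust-aggregation idea relevant to the project above (the architectures, dimensions, and the choice of median aggregation are illustrative assumptions, not the project's prescribed design):

    import torch
    import torch.nn as nn

    # One generator trained against the *median* score of several small,
    # heterogeneous discriminators, so that no single poisoned discriminator
    # dominates the generator's learning signal. All sizes are toy values.
    latent_dim, data_dim = 8, 16
    generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    discriminators = [
        nn.Sequential(nn.Linear(data_dim, width), nn.ReLU(), nn.Linear(width, 1))
        for width in (8, 16, 32)  # heterogeneous architectures
    ]

    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    fake = generator(torch.randn(64, latent_dim))
    scores = torch.stack([d(fake) for d in discriminators])  # (n_disc, 64, 1)
    median_score, _ = scores.median(dim=0)                   # robust aggregation
    loss_g = -median_score.mean()  # the generator seeks high median scores
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()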


  • GANs with Transformers: Since its introduction in 2017, the Transformer architecture has revolutionized NLP machine learning models. Thanks to the scalability of self-attention-only architectures, models can now scale to trillions of parameters, allowing human-like capacities of text generation. However, they are not without shortcomings, notably due to their maximum-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning, Generative Adversarial Networks (GANs), performs remarkably well on images but has until recently struggled with texts, whose sequential and discrete nature is not compatible with the gradient back-propagation GANs need to train (a sketch of a common workaround follows below). Some of those issues have been solved, but a major one remains: limited scalability, due to the usage of RNNs instead of pure self-attention architectures. Previously, we were able to show that it is impossible to trivially replace RNN layers with Transformer layers (https://arxiv.org/abs/2108.12275, presented at RANLP 2021). This project will build on those results and attempt to create stable Transformer-based text GANs using the tricks known to stabilize Transformer training, or attempt to theoretically demonstrate the inherent instability of Transformer-derived architectures in an adversarial regime.

You will need solid background knowledge of linear algebra, acquaintance with the theory of machine learning, specifically neural networks, as well as experience with scientific computing in Python, ideally with PyTorch. Experience with NLP is desirable but not required.
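
One standard workaround for the discreteness obstacle mentioned above is the Gumbel-softmax relaxation, which keeps token sampling differentiable so that discriminator gradients can reach the generator. A minimal PyTorch sketch (vocabulary size, sequence length, and temperature are illustrative):

    import torch
    import torch.nn.functional as F

    # Sample (near) one-hot tokens from generator logits while remaining
    # differentiable: one-hot on the forward pass, soft straight-through
    # gradients on the backward pass.
    vocab_size, seq_len = 100, 12
    logits = torch.randn(seq_len, vocab_size, requires_grad=True)  # generator output

    soft_tokens = F.gumbel_softmax(logits, tau=0.8, hard=True)
    loss = soft_tokens.sum()   # stand-in for a discriminator score
    loss.backward()
    print(logits.grad.shape)   # torch.Size([12, 100]): gradients reach the generator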


Semester Projects

If the subject of a Master Project interests you as a Semester Project, please contact the supervisor of that Master Project to see whether it can be adapted into a Semester Project.

EPFL I&C duration, credits, and workload information is available here. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.