Education


The lab is teaching the following courses:


In the past, the lab taught the following courses:



Projects

Master Projects

DCL offers master projects in the following areas:

  • On the design and implementation of scalable and secure blockchain algorithms: Consensus has recently gained in popularity with the advent of blockchain technologies. Unfortunately, most blockchains do not scale, in part because of their centralized (leader-based) design. We recently designed a fully decentralized (leader-less) algorithm that promises to scale to large networks. The goal of this project is to implement it in Rust and compare its performance on AWS instances against a traditional leader-based alternative like BFT-Smart, whose code will be provided. Contact Vincent Gramoli for more information.
  • Making Blockchain Accountable: One of the key drawbacks of blockchain is its lack of accountability: it does not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be costly in communication: to detect a malicious participant who has sent deceitful messages to different honest participants in order to make them disagree, one may be tempted to force each honest participant to exchange all the messages they receive and cross-check them. However, we have recently designed an algorithm that shares the same communication complexity as the consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it on a distributed set of machines against a baseline implementation; a minimal sketch of the naive cross-checking idea appears after the project list. Contact Vincent Gramoli for more information.
  • GAR performance on different datasets: Robust machine learning on textual data and content recommendation is critical for the safety of social media users (harassment, hate speech, etc.), but also for the reliability of scientific uses of natural language processing, such as processing computer programs, chemistry and drug discovery. Text datasets are known to have long-tailed distributions, which poses specific challenges for robustness, while content recommendation datasets may feature clusters of similar users. The goal of this project is to better understand the properties of different datasets, and what makes a gradient aggregation rule (GAR, e.g. Krum, trimmed mean…) better than another, given a specific text dataset (conversational chatbots, translation, GitHub code, etc.); a sketch of one such rule appears after the project list. Contact Lê Nguyên Hoang for more information.
  • Strategyproof collaborative filtering: In collaborative filtering, other users' inputs are used to generalize the preferences of a given user. Such an approach has been critical to improving performance. However, it exposes each user to manipulation by the inputs of malicious users, which is arguably occurring on social media today. In this theoretical project, we search for Byzantine-resilient and strategyproof learning algorithms that perform something akin to collaborative filtering. This would also have important applications for implicit voting systems on exponential-size decision sets. Contact Lê Nguyên Hoang for more information.
  • Probabilistic Byzantine Resilience: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results (see the simulation sketch after the project list). Please contact Matteo Monti for more information.
  • Distributed computing using RDMA and/or NVRAM: RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. NVRAM is byte-addressable persistent (non-volatile) memory with access times on the same order of magnitude as traditional (volatile) RAM. These two recent technologies pose novel challenges and open up new opportunities in distributed system design and implementation. Contact Igor Zablotchi for more information.
  • Robust Distributed Machine Learning: With the proliferation of big datasets and models, Machine Learning is becoming distributed. Following the standard parameter server model, the learning phase is carried out by two categories of machines: parameter servers and workers. Any of these machines could behave arbitrarily (i.e., be Byzantine), affecting the model's convergence during the learning phase. Our goal in this project is to build a system that is robust against Byzantine behavior of both parameter servers and workers. Our first prototype, AggregaThor (https://mlsys.org/Conferences/2019/doc/2019/54.pdf), describes the first scalable robust Machine Learning framework. It fixed a severe vulnerability in TensorFlow and showed how to make TensorFlow even faster, while remaining robust. Contact Arsany Guirguis for more information.
  • Consistency in global-scale storage systems: We offer several projects in the context of storage systems, ranging from the implementation of social applications (similar to Retwis or ShareJS) to recommender systems, static content storage services (à la Facebook's Haystack), and experiments with well-known cloud serving benchmarks (such as YCSB); please contact Adi Seredinschi or Karolos Antoniadis for further information.


  • Theory of evolution to improve GAN training: Generative adversarial networks (GANs) have achieved some spectacular results in the six years since their introduction. However, their training process is still fraught with issues such as mode collapse, non-convergence or gradient collapse. These issues have still not been completely resolved, making multiple restarts and the selection of well-performing Generator-Discriminator pairs part of the GAN training process. The process of adversarial generator-discriminator training is not dissimilar to the co-evolution of two adversarial species, such as hosts and pathogens, except that rounds of mutation/recombination/selection in search of a fitness optimum are replaced by gradient descent (a minimal training-loop sketch appears after the project list). Our goal is to investigate, both experimentally and theoretically, whether we can further stabilize and improve GAN training with evolutionary mechanisms, such as speciation, aneuploidization, neutral variability buffering or meta-evolutionary mechanisms. This work would have implications for developing efficient solutions for detecting GAN products (a.k.a. deepfakes). You will need experience with scientific computing in Python, ideally with PyTorch, and ideally some knowledge of population genetics. Contact Andrei Kucharavy for more information.
  • Text GANs: For a long time, Generative adversarial networks (GANs) were limited to differentiable data generation, notably images and videos. Recent advances allow text-generating GANs to be built. Lighter and easier to train than other solutions capable of text generation, their capabilities remain unexplored as of now. The goal of this project would be to implement possible GAN architectures and evaluate their ability to generate texts with pre-set features, such as style or topics. You will need experience with scientific computing in Python, ideally with PyTorch, and ideally some experience with NLP. Contact Andrei Kucharavy for more information.
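
For the accountability project above, here is a minimal sketch of the naive cross-checking idea it contrasts with, written in Python under illustrative assumptions: votes are signed (signatures elided here), and the names Vote and cross_check are hypothetical. The project's algorithm achieves the same detection without this all-to-all exchange of received messages.

  # Hypothetical sketch: detect equivocation by cross-checking the votes
  # that honest participants received (signatures omitted for brevity).
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Vote:
      sender: str   # identity of the (possibly Byzantine) voter
      round: int    # consensus round the vote belongs to
      value: str    # a correct sender votes for a single value per round

  def cross_check(logs):
      """Given the vote logs gathered by honest participants, return
      proofs (pairs of conflicting votes) against equivocating senders."""
      seen = {}     # (sender, round) -> first value observed
      proofs = []   # conflicting pairs, enough to blame the sender
      for log in logs:
          for v in log:
              key = (v.sender, v.round)
              if key in seen and seen[key] != v.value:
                  proofs.append((key, seen[key], v.value))
              else:
                  seen[key] = v.value
      return proofs

  # Two honest participants received conflicting round-1 votes from "byz".
  log_a = [Vote("byz", 1, "spend-at-A"), Vote("alice", 1, "x")]
  log_b = [Vote("byz", 1, "spend-at-B"), Vote("alice", 1, "x")]
  print(cross_check([log_a, log_b]))  # [(('byz', 1), 'spend-at-A', 'spend-at-B')]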
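
For the GAR project, here is a minimal sketch of one classic gradient aggregation rule, the coordinate-wise trimmed mean, using NumPy; the data and parameters are illustrative, and other GARs such as Krum would sit behind the same interface.

  # Coordinate-wise trimmed mean: per coordinate, drop the f smallest and
  # f largest values across workers, then average the rest.
  import numpy as np

  def trimmed_mean(gradients, f):
      """gradients: (n, d) array, one row per worker; f: Byzantine bound."""
      n = gradients.shape[0]
      assert n > 2 * f, "need n > 2f gradients to trim f from each side"
      sorted_grads = np.sort(gradients, axis=0)  # sort each coordinate
      return sorted_grads[f:n - f].mean(axis=0)  # average the middle values

  # Nine honest gradients near (1, 1) plus one adversarial outlier.
  rng = np.random.default_rng(0)
  grads = np.vstack([rng.normal(1.0, 0.1, size=(9, 2)),
                     np.array([[1e6, -1e6]])])   # Byzantine worker
  print(trimmed_mean(grads, f=1))                # close to (1, 1)

A harness like this makes it easy to swap in different rules and compare them on gradients sampled from different datasets, which is the heart of the project.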
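
For the practical option of the probabilistic Byzantine resilience project, here is a toy Monte Carlo sketch of the kind of numerical evaluation involved, assuming a simple push-gossip broadcast; the parameters and the function name gossip_reaches_all are illustrative assumptions, not the lab's protocol.

  # Estimate the probability that a push-gossip broadcast reaches all nodes.
  import random

  def gossip_reaches_all(n=100, fanout=8, rounds=5):
      informed = {0}                    # node 0 broadcasts
      for _ in range(rounds):
          newly = set()
          for node in informed:
              newly.update(random.sample(range(n), fanout))  # push to peers
          informed |= newly
      return len(informed) == n

  trials = 1000
  hits = sum(gossip_reaches_all() for _ in range(trials))
  print(f"estimated delivery probability: {hits / trials:.3f}")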
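
For the two GAN projects, here is a minimal PyTorch sketch of the adversarial generator/discriminator loop on toy one-dimensional data; the architecture, learning rates and the N(3, 0.5) target are illustrative assumptions, and the evolutionary mechanisms under investigation are deliberately absent.

  # Minimal GAN loop: D learns to tell real from fake, G learns to fool D.
  import torch
  import torch.nn as nn

  G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
  D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)
  opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
  opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
  bce = nn.BCEWithLogitsLoss()

  for step in range(2000):
      real = torch.randn(64, 1) * 0.5 + 3.0     # "real" data: N(3, 0.5)
      fake = G(torch.randn(64, 2))              # generator samples from noise

      # Discriminator step: label real as 1, fake as 0.
      loss_d = (bce(D(real), torch.ones(64, 1))
                + bce(D(fake.detach()), torch.zeros(64, 1)))
      opt_d.zero_grad()
      loss_d.backward()
      opt_d.step()

      # Generator step: make the discriminator label fakes as real.
      loss_g = bce(D(fake), torch.ones(64, 1))
      opt_g.zero_grad()
      loss_g.backward()
      opt_g.step()

  print(G(torch.randn(1000, 2)).mean().item())  # should drift toward 3.0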

Semester Projects

If the subject of a Master Project interests you, please contact its supervisor to see whether it can also be offered as a Semester Project.

EPFL I&C duration, credits and workload information are available here. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.

Collaborative Projects

The lab also collaborates with industry and other labs at EPFL to offer interesting student projects motivated by real-world problems. With LARA and the Interchain Foundation, we have several projects:

  1. AT2: Integration of an asynchronous (consensus-less) payment system in the Cosmos Hub.
  2. Interblockchain Communication (IBC): Protocol descriptions (and optional implementations) enabling independent blockchain applications to interoperate.
  3. Stainless: Implementation of Tendermint modules (consensus, mempool, fast sync) using Stainless and Scala.
  4. Prusti: Implementation of Tendermint modules (consensus, mempool, fast sync) using Prusti and the Rust programming language.
  5. Mempool performance analysis and algorithm improvement.
  6. Adversarial engineering: Experimental evaluation of Tendermint in adversarial settings (e.g., in the style of Jepsen).
  7. Testing: Generation of tests out of specifications (TLA+ or Stainless) for the consensus module of Tendermint.
  8. Facebook Libra comparative research: Comparative analysis of consensus algorithms, specifically between HotStuff (the consensus algorithm underlying Facebook's Libra) and Tendermint consensus.

Contact Adi Seredinschi (INR 327) if interested in learning more about these projects.