Differences

This shows you the differences between two versions of the page.

education [2019/05/27 12:10]
fablpd
education [2021/10/08 12:47]
fablpd
Line 8: Line 8:
 \\
  
-  * [[education/ca_2018|Concurrent Algorithms]] (theory & practice)
+  * [[education/ca_2021|Concurrent Algorithms]] (theory & practice)
   * [[education/da|Distributed Algorithms]] (theory & practice)
 \\
Line 26: Line 26:
 DCL offers master projects in the following areas:
  
-  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating of our theoretical results. Please contact [[matteo.monti@epfl.ch|Matteo Monti]] to get more information.
+  * **[[cryptocurrencies|Cryptocurrencies]]**: We have several project openings as part of our ongoing research on designing new cryptocurrency systems. Please contact [[rachid.guerraoui@epfl.ch|Prof. Rachid Guerraoui]].
  
 +  * **On the design and implementation of scalable and secure blockchain algorithms**: Consensus has recently gained in popularity with the advent of blockchain technologies. Unfortunately, most blockchains do not scale, due in part to their centralized (leader-based) design. We recently designed a fully decentralized (leader-less) algorithm that promises to scale to large networks. The goal of this project is to implement it in Rust and to compare its performance on AWS instances against a traditional leader-based alternative such as BFT-Smart, whose code will be provided. Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.
  
-  * **Distributed computing using RDMA and/or NVRAM**: contact [[https://people.epfl.ch/igor.zablotchi|Igor Zablotchi]] for more information.
+  * **Making Blockchain Accountable**: One of the key drawbacks of blockchains is their lack of accountability: they do not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double-spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be communication-costly: to detect a malicious participant who has sent deceitful messages to different honest participants to make them disagree, one may be tempted to force each honest participant to exchange all the messages it receives and cross-check them (a minimal sketch of such cross-checking appears below). However, we have recently designed an algorithm that shares the same communication complexity as the current consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it on a distributed set of machines against a baseline implementation. Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.
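The naive cross-checking idea mentioned above can be illustrated with a short, hedged sketch (this is not the lab's accountable consensus algorithm): honest participants pool the proposals they received, and any sender observed proposing two different values for the same round is flagged. The message format and function name below are illustrative assumptions.

<code python>
# Illustrative sketch of equivocation detection by cross-checking.
# NOT the accountable consensus algorithm described above; it only shows
# the naive idea of exchanging received messages and comparing them.
from collections import defaultdict

def detect_equivocators(received_messages):
    """received_messages: iterable of (sender, round, proposal) tuples,
    pooled from all honest participants after exchanging what they saw.
    Returns the senders caught proposing two different values for the
    same round."""
    seen = defaultdict(set)          # (sender, round) -> {proposals}
    equivocators = set()
    for sender, rnd, proposal in received_messages:
        seen[(sender, rnd)].add(proposal)
        if len(seen[(sender, rnd)]) > 1:
            equivocators.add(sender)
    return equivocators

# Example: Byzantine node "p3" tells one group value "A" and another "B".
pool = [("p1", 1, "A"), ("p2", 1, "A"), ("p3", 1, "A"), ("p3", 1, "B")]
assert detect_equivocators(pool) == {"p3"}
</code>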
  
-  * **[[Distributed ML|Distributed Machine Learning]]**: contact [[http://people.epfl.ch/georgios.damaskinos|Georgios Damaskinos]] for more information.
+  * **Robust mean estimation**: In recent years, many algorithms have been proposed to perform robust mean estimation, which has been shown to be equivalent to robust gradient-based machine learning. A new concept has been proposed to define the performance of a robust mean estimator, called the averaging constant (along with Byzantine resilience). This research project consists of computing the theoretical averaging constant of different proposed robust mean estimators and studying their empirical performance on randomly generated vectors (a toy comparison is sketched below). Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
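As a hedged illustration of the kind of empirical study mentioned above, the sketch below compares two standard robust aggregation rules (coordinate-wise median and trimmed mean) against the plain average on randomly generated vectors, a fraction of which are corrupted. The estimators and the corruption model are generic textbook choices, not the specific estimators or averaging constants studied in the project.

<code python>
# Toy comparison of robust mean estimators on random vectors with outliers.
import numpy as np

def coordinate_wise_median(X):
    return np.median(X, axis=0)

def trimmed_mean(X, trim_ratio=0.2):
    k = int(trim_ratio * X.shape[0])
    Xs = np.sort(X, axis=0)              # sort each coordinate independently
    return Xs[k:X.shape[0] - k].mean(axis=0)

rng = np.random.default_rng(0)
n, d, n_byz = 100, 10, 20                # 20 corrupted vectors out of 100
true_mean = np.ones(d)
X = rng.normal(loc=true_mean, scale=1.0, size=(n, d))
X[:n_byz] = 100.0                        # Byzantine vectors far from the mean

for name, est in [("average", X.mean(axis=0)),
                  ("coordinate-wise median", coordinate_wise_median(X)),
                  ("trimmed mean", trimmed_mean(X))]:
    print(f"{name:25s} error = {np.linalg.norm(est - true_mean):.3f}")
</code>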
  
-  * **Robust Distributed Machine Learning**: With the proliferation of big datasets and models, Machine Learning is becoming distributed. Following the standard parameter server model, the learning phase is taken by two categories of machines: parameter servers and workers. Any of these machines could behave arbitrarily (i.e., said Byzantine) affecting the model convergence in the learning phase. Our goal in this project is to build a system that is robust against Byzantine behavior of both parameter server and workers. Our first prototype, AggregaThor (https://www.sysml.cc/doc/2019/54.pdf), describes the first scalable robust Machine Learning framework. It fixed a severe vulnerability in TensorFlow and it showed how to make TensorFlow even faster, while robust. Contact [[https://people.epfl.ch/arsany.guirguis|Arsany Guirguis]] or [[https://people.epfl.ch/sebastien.rouault|Sébastien Rouault]] for more information.
  
-  * **Stochastic gradient: (artificial) reduction of the ratio variance/norm for adversarial distributed SGD**: One computationally-efficient and non-intrusive line of defense for adversarial distributed SGD (e.g. 1 parameter server distributing the gradient estimation to several, possibly adversarial, workers) relies on the honest workers to send back gradient estimations with sufficiently low variance; an assumption which is sometimes hard to satisfy in practice. One solution could be to (drastically) increase the batch-size at the workers, but doing so may as well defeat the very purpose of distributing the computation. In this project, we propose two approaches that you can choose to explore (you may also propose a different approach) to (artificially) reduce the ratio variance/norm of the stochastic gradients, while keeping the benefits of the distribution. The first proposed approach, speculative, boils down to "intelligent" coordinate selection.
-The second makes use of some kind of "momentum" at the workers.
-[1] "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent" (https://papers.nips.cc/paper/6617-machine-learning-with-adversaries-byzantine-tolerant-gradient-descent)
-[2] "Federated Learning: Strategies for Improving Communication Efficiency" (https://arxiv.org/abs/1610.05492)
+  * **Accelerate Byzantine collaborative learning**: Our recent NeurIPS paper proposed algorithms for collaborative machine learning in the presence of Byzantine nodes, which have been proved to be near-optimal with respect to optimality at convergence. However, these algorithms require all-to-all communication at every round, which is suboptimal. This research consists of designing a practical solution to Byzantine collaborative learning, based on the idea of a random communication network at each round, with both theoretical guarantees and a practical implementation (a minimal sketch of such a round appears below). Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
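As a hedged sketch of the "random communication network at each round" idea, each node could exchange its current model with a few randomly chosen peers per round and aggregate what it receives with a robust rule. The topology, the number of peers, and the coordinate-wise median used here are illustrative assumptions, not the algorithm from the paper.

<code python>
# Hedged sketch: one round of collaborative learning over a random
# communication graph, with a coordinate-wise median as a stand-in
# robust aggregation rule.
import numpy as np

def one_round(models, k, rng):
    """models: (n_nodes, dim) array of current local models."""
    n = models.shape[0]
    updated = np.empty_like(models)
    for i in range(n):
        peers = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        received = np.vstack([models[i:i + 1], models[peers]])
        updated[i] = np.median(received, axis=0)   # robust aggregation
    return updated

rng = np.random.default_rng(1)
models = rng.normal(size=(10, 5))       # 10 nodes, 5-dimensional models
for _ in range(20):                     # a few rounds of sparse communication
    models = one_round(models, k=3, rng=rng)
print("spread after 20 rounds:", np.ptp(models, axis=0).max())
</code>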
  
-  * **Consistency in global-scale storage systems**: We offer several projects in the context of storage systems, ranging from the implementation of social applications (similar to [[http://retwis.redis.io/|Retwis]] or [[https://github.com/share/sharejs|ShareJS]]) to recommender systems, static content storage services (à la [[https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf|Facebook's Haystack]]), or experimenting with well-known cloud serving benchmarks (such as [[https://github.com/brianfrankcooper/YCSB|YCSB]]); please contact [[http://people.epfl.ch/dragos-adrian.seredinschi|Adrian Seredinschi]] or [[https://people.epfl.ch/karolos.antoniadis|Karolos Antoniadis]] for further information.
+  * **Decentralize Tournesol’s learning algorithms**: The Tournesol platform leverages the contributions of its community of contributors to assign a « should be more recommended » score to YouTube videos rated by the contributors, using a learning algorithm. Currently, the computations are performed on a central server. But as Tournesol’s user base grows, and as more sophisticated learning algorithms are considered for deployment, there is a growing need to decentralize the computations of the learning algorithm. This project aims to build a framework which will enable Tournesol users to run part of the computations of Tournesol’s scores directly in their browsers. Contact [[https://people.epfl.ch/le.hoang/?lang=en|Lê Nguyên Hoang]] for more information.
  
 +  * **Listening to the silent majority**: Vanilla machine learning from user-generated data inevitably favors those who generated the most data. This means that learning algorithms will be optimized for these users, rather than for the silent majority. This research aims to correct this bias by trying to infer what data the majority would likely have generated, and what the models would have learned if the silent majority’s data had been included in their training. It involves designing algorithms, proving their correctness, and implementing them. This research is motivated by the Tournesol project. Contact Lê Nguyên Hoang for more information.
 +
 +  * **Should experts be given more voting rights?**: This is a question that Condorcet tackled in 1785, through what is now known as the jury problem. However, his model was crude and does not apply to many critical problems, e.g. determining whether a video on vaccines should be widely recommended. This research aims to better understand how voting rights should be allocated, based not only on how likely voters are to be correct, but also on the correlations between the voters’ judgments (the classical independent-voter baseline is sketched below). So far, it involves mostly theoretical analysis. This research is motivated by the Tournesol project. Contact Lê Nguyên Hoang for more information.
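For context, a classical and well-known baseline: when voters are independent and voter i is correct with probability p_i, the weighted majority rule that maximizes the probability of a correct collective decision assigns voter i the weight log(p_i / (1 - p_i)) (the Nitzan-Paroush rule). The project asks what happens beyond this independence assumption; the snippet below, with made-up competence values, only illustrates the baseline.

<code python>
# Classical baseline under independence: optimal weights for weighted
# majority voting are the log-odds of each voter's competence.
import math

def log_odds_weights(competences):
    return [math.log(p / (1.0 - p)) for p in competences]

def weighted_vote(votes, weights):
    """votes: +1 / -1 per voter; returns the collective decision."""
    score = sum(w * v for w, v in zip(weights, votes))
    return 1 if score >= 0 else -1

competences = [0.9, 0.7, 0.7, 0.7, 0.7]      # one expert, four novices
weights = log_odds_weights(competences)
print([round(w, 2) for w in weights])        # expert ~2.20, novices ~0.85
# Four reasonably competent voters together outweigh the single expert:
print(weighted_vote([-1, 1, 1, 1, 1], weights))   # -> 1
</code>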
 +
 +  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact [[matteo.monti@epfl.ch|Matteo Monti]] to get more information.
 +
 +  * **Distributed coordination using RDMA.** RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. This technology has been gaining traction over the last couple of years, as it allows for the creation of real-time distributed systems. RDMA allows communication to take place close to the μsec scale, which enables the design and implementation of systems that process requests in only tens of μsec. Current research focuses on achieving real-time failure detection through a combination of novel algorithm design, the latest hardware, and Linux kernel customization (a generic lease-based sketch of failure detection appears below). Fast failure detection over RDMA brings the notion of availability to a new level, essentially allowing modern systems to enter the era of 7 nines of availability. Contact [[https://people.epfl.ch/athanasios.xygkis|Athanasios Xygkis]] and [[https://people.epfl.ch/antoine.murat|Antoine Murat]] for more information.
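Failure detection can be illustrated, very loosely, with a timeout-based heartbeat monitor. This is a generic lease/heartbeat scheme in plain Python, not the RDMA-based detector developed in the lab; the timeout value and class name are arbitrary assumptions.

<code python>
# Generic heartbeat-based failure detector (illustrative only; the actual
# project relies on RDMA and kernel customization, not Python timers).
import time

class HeartbeatDetector:
    def __init__(self, timeout_s=0.001):    # ~1 ms lease, arbitrary choice
        self.timeout_s = timeout_s
        self.last_seen = {}                  # node id -> last heartbeat time

    def heartbeat(self, node):
        """Record a heartbeat from `node` (e.g. on message receipt)."""
        self.last_seen[node] = time.monotonic()

    def suspected(self):
        """Return the nodes whose lease has expired."""
        now = time.monotonic()
        return {n for n, t in self.last_seen.items()
                if now - t > self.timeout_s}

detector = HeartbeatDetector()
detector.heartbeat("replica-1")
detector.heartbeat("replica-2")
time.sleep(0.002)                            # both leases expire
detector.heartbeat("replica-2")              # replica-2 renews its lease
print(detector.suspected())                  # -> {'replica-1'}
</code>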
 +
 +
 + 
  
  
 \\
  
 +  * **Byzantine-resilient heterogeneous GANs**: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part due to the need to distribute machine learning across many nodes for performance and privacy reasons. Until now it has focused on training a single model across many workers and many parameter servers. While this approach has brought formidable results - including in GAN training - the topic of efficient, distributed and Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of Generative Adversarial Networks (GANs), such learning is critical to training light discriminators that can specialize in detecting specific features of generator-generated images. The goal of this project is to investigate the potential for poisoning of the GAN training process by malicious discriminators and generators, and to investigate efficient protocols that ensure the robustness of the training process (a toy robust-aggregation example appears below). You will need to have experience with scientific computing in Python, ideally with PyTorch experience, and notions of distributed computing. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
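As a hedged, deliberately simplified numeric illustration (not a real GAN training loop and not the project's protocol), a generator receiving feedback from several discriminators could aggregate their scores with a median rather than a mean, so that a single poisoned score has little influence:

<code python>
# Toy illustration: aggregating feedback from several discriminators.
# A single poisoned score shifts the mean but barely moves the median.
import numpy as np

honest_scores = np.array([0.62, 0.58, 0.65, 0.60])  # plausible D(x) outputs
poisoned = np.append(honest_scores, 50.0)            # one malicious discriminator

print("mean  :", poisoned.mean())      # dragged far from the honest range
print("median:", np.median(poisoned))  # stays close to the honest scores
</code>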
 +\\
 +
 +  * **GANs with Transformers**: Since its introduction in 2017, the Transformer architecture has revolutionized NLP machine learning models. Thanks to the scalability of self-attention-only architectures, these models can now scale into trillions of parameters, allowing human-like capabilities of text generation. However, they are not without their own shortcomings, notably due to their max-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning - Generative Adversarial Networks (GANs) - performs remarkably well when it comes to images, but has until recently struggled with texts, due to their sequential and discrete nature, which is not compatible with the gradient back-propagation GANs need to train. Some of those issues have been solved, but a major one remains: scalability, due to the usage of RNNs instead of pure self-attention architectures. Previously, we were able to show that it is impossible to trivially replace RNN layers with Transformer layers (https://arxiv.org/abs/2108.12275, presented at RANLP 2021). This project will build on those results and attempt to create stable Transformer-based text GANs based on the tricks known to stabilize Transformer training, or attempt to theoretically demonstrate the inherent instability of Transformer-derived architectures in the adversarial regime. You will need a solid background in linear algebra, acquaintance with the theory of machine learning, specifically neural networks, as well as experience with scientific computing in Python, ideally with PyTorch experience. Experience with NLP is desirable but not required.
 +
 +
 +\\
  
 ===== Semester Projects =====
Line 52: Line 66:
  
 EPFL I&C duration, credits and workload information are available [[https://www.epfl.ch/schools/ic/education/|here]]. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.
 +
 +