\\
  
  * [[education/ca_2021|Concurrent Algorithms]] (theory & practice)
  * [[education/da|Distributed Algorithms]] (theory & practice)
\\
DCL offers master projects in the following areas:
  
  * **[[cryptocurrencies|Cryptocurrencies]]**: We have several project openings as part of our ongoing research on designing new cryptocurrency systems. Please contact [[rachid.guerraoui@epfl.ch|Prof. Rachid Guerraoui]].
  
  * **On the design and implementation of scalable and secure blockchain algorithms**: Consensus has recently gained in popularity with the advent of blockchain technologies. Unfortunately, most blockchains do not scale, in part because of their centralized (leader-based) design. We recently designed a fully decentralized (leader-less) algorithm that promises to scale to large networks. The goal of this project is to implement it in Rust and compare its performance on AWS instances against a traditional leader-based alternative like BFT-Smart, whose code will be provided (a toy illustration of the leader-less style is sketched below). Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.
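To give a concrete feel for the leader-less style, the sketch below simulates a simplified, fault-free version of Ben-Or's classic randomized binary consensus, in which every process plays the same role and no leader exists. This is emphatically not the lab's algorithm; all thresholds and parameters are illustrative.

<code python>
# Toy, synchronous, fault-free simulation of Ben-Or-style leader-less
# binary consensus. Illustrative only: not the algorithm this project targets.
import random

def ben_or(inputs, f, max_rounds=100, seed=0):
    """Fault-free simulation; f only sets the phase-2 decision threshold."""
    rng = random.Random(seed)
    n = len(inputs)
    values, decided = list(inputs), [None] * n
    for _ in range(max_rounds):
        # Phase 1: everyone reports its value; a value held by a strict
        # majority becomes the phase-2 proposal, otherwise '?' is proposed.
        reports = list(values)
        maj = max(set(reports), key=reports.count)
        proposal = maj if reports.count(maj) > n // 2 else '?'
        proposals = [proposal] * n          # identical here, since no faults
        # Phase 2: decide on a value proposed f+1 times; adopt it if proposed
        # at least once; otherwise flip a local coin.
        backed = [p for p in proposals if p != '?']
        for i in range(n):
            if backed and backed.count(backed[0]) >= f + 1:
                decided[i] = values[i] = backed[0]
            elif backed:
                values[i] = backed[0]
            else:
                values[i] = rng.choice([0, 1])
        if all(d is not None for d in decided):
            return decided
    return decided

print(ben_or([0, 1, 1, 0, 1], f=1))   # every process decides the same bit: 1
</code>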
  
  * **Making Blockchain Accountable**: One of the key drawbacks of blockchain is its lack of accountability: it does not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be communication costly: to detect a malicious participant who has sent deceitful messages to different honest participants to make them disagree, one may be tempted to force each honest participant to exchange and cross-check all the messages they receive. However, we have recently designed an algorithm that has the same communication complexity as the consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it, on a distributed set of machines, against a baseline implementation; the sketch below shows the core detection idea. Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.
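The detection idea at the heart of accountability fits in a few lines. The sketch below is only a toy (signatures are omitted and all names are made up): it records each message per (sender, round) and treats any conflicting pair as a transferable proof of equivocation.

<code python>
# Toy equivocation detector: conflicting messages from the same sender in the
# same round are, together, a proof of misbehavior. Signatures are mocked out;
# a real system would verify and store signed messages.
class EquivocationDetector:
    def __init__(self):
        self.seen = {}       # (sender, round) -> first message observed
        self.proofs = []     # collected proofs of misbehavior

    def observe(self, sender, rnd, message):
        key = (sender, rnd)
        if key in self.seen and self.seen[key] != message:
            # Two different messages for the same (sender, round): equivocation.
            self.proofs.append((sender, rnd, self.seen[key], message))
        else:
            self.seen[key] = message

detector = EquivocationDetector()
detector.observe("p3", 1, "commit A")
detector.observe("p3", 1, "commit B")   # conflicting message, caught here
print(detector.proofs)                  # [('p3', 1, 'commit A', 'commit B')]
</code>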
  
  * **GAR performance on different datasets**: Robust machine learning on textual data and content recommendation is critical for the safety of social media users (harassment, hate speech, etc.), but also for the reliability of scientific uses of natural language processing, such as processing computer programs, chemistry and drug discovery. Text datasets are known to have long-tailed distributions, which poses specific challenges for robustness, while content recommendation datasets may feature clusters of similar users. The goal of this project is to better understand the properties of different datasets, and what makes one gradient aggregation rule (e.g. Krum, trimmed mean, ...; see the sketch below) better than another on a specific text dataset (conversational chatbots, translation, GitHub code, etc.). Contact [[https://people.epfl.ch/le.hoang|Lê Nguyên Hoang]] for more information.
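For concreteness, here is a minimal NumPy sketch (not the project code) of two of the gradient aggregation rules named above, coordinate-wise trimmed mean and Krum (Blanchard et al., 2017), where `f` is the assumed number of Byzantine workers.

<code python>
import numpy as np

def trimmed_mean(grads, f):
    """Drop the f smallest and f largest values per coordinate, then average."""
    g = np.sort(np.stack(grads), axis=0)          # sort each coordinate
    return g[f:len(grads) - f].mean(axis=0)

def krum(grads, f):
    """Return the gradient closest (in squared distance) to its n-f-2 nearest peers."""
    g = np.stack(grads)
    n = len(g)
    d = ((g[:, None, :] - g[None, :, :]) ** 2).sum(-1)   # pairwise squared dists
    scores = [np.sort(d[i])[1:n - f - 1].sum()           # skip self (distance 0)
              for i in range(n)]
    return g[int(np.argmin(scores))]

grads = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
         np.array([0.9, 1.1]), np.array([100.0, -100.0])]  # last one malicious
print(trimmed_mean(grads, f=1))   # outlier trimmed away per coordinate
print(krum(grads, f=1))           # outlier never selected
</code>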
  
  * **Strategyproof collaborative filtering**: In collaborative filtering, other users' inputs are used to generalize the preferences of a given user. Such an approach has been critical to improving performance. However, it exposes each user to manipulation by the inputs of malicious users, which is arguably occurring on social media today. In this theoretical project, we search for Byzantine-resilient and strategyproof learning algorithms to perform something akin to collaborative filtering (the sketch below gives the flavor of the guarantee sought). This would also have important applications for implicit voting systems over exponential-size decision sets. Contact [[https://people.epfl.ch/le.hoang|Lê Nguyên Hoang]] for more information.
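As a toy illustration of the kind of guarantee sought (and not the project's algorithm), the coordinate-wise median is a classic aggregator with strategyproofness properties for single-peaked preferences: a user cannot drag the outcome toward an extreme by misreporting.

<code python>
# Illustrative only: coordinate-wise median aggregation of user preference
# vectors. A manipulator reporting an extreme vector barely moves the output.
import numpy as np

def median_aggregate(preference_vectors):
    return np.median(np.stack(preference_vectors), axis=0)

honest = [np.array([0.2, 0.8]), np.array([0.3, 0.7]), np.array([0.25, 0.75])]
print(median_aggregate(honest))                         # [0.25 0.75]
# An extreme misreport shifts the median only marginally:
print(median_aggregate(honest + [np.array([9.0, -9.0])]))
</code>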
  
  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results (the sketch below gives the flavor). Please contact [[matteo.monti@epfl.ch|Matteo Monti]] for more information.
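A minimal sketch of what the numerical-evaluation option could look like, under purely illustrative parameters: estimating by Monte Carlo the probability that a uniformly drawn sample of peers contains a Byzantine majority.

<code python>
# Illustrative Monte Carlo estimate; all parameters below are made up and do
# not come from the lab's analysis.
import random

def sample_failure_prob(n=1000, byzantine=100, sample_size=30,
                        trials=100_000, seed=0):
    rng = random.Random(seed)
    population = [True] * byzantine + [False] * (n - byzantine)
    failures = 0
    for _ in range(trials):
        sample = rng.sample(population, sample_size)      # without replacement
        if sum(sample) > sample_size // 2:                # Byzantine majority
            failures += 1
    return failures / trials

print(sample_failure_prob())  # tiny for 10% Byzantine nodes, samples of 30
</code>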
  
  * **Distributed coordination using RDMA**: RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. This technology has been gaining traction over the last couple of years, as it allows for the creation of real-time distributed systems. RDMA enables communication close to the μsec scale, which makes it possible to design and implement systems that process requests in only tens of μsec. Current research focuses on achieving real-time failure detection through a combination of novel algorithm design, the latest hardware, and Linux kernel customization. Fast failure detection over RDMA brings the notion of availability to a new level, essentially allowing modern systems to enter the era of 7 nines of availability. A sketch of the underlying lease-based detection logic follows below. Contact [[https://people.epfl.ch/athanasios.xygkis|Athanasios Xygkis]] and [[https://people.epfl.ch/antoine.murat|Antoine Murat]] for more information.
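The real system is built close to the hardware (RDMA, kernel customization); the Python sketch below only illustrates the lease-based failure-detection logic itself, with a hypothetical 50 μs timeout.

<code python>
# Conceptual sketch of heartbeat/lease failure detection. In the real system
# heartbeats would be written directly into remote memory over RDMA; here we
# just model the timeout logic. The 50 μs value is an assumption.
import time

class HeartbeatDetector:
    def __init__(self, timeout_us=50):
        self.timeout = timeout_us / 1e6           # timeout in seconds
        self.last_beat = {}                       # node -> last heartbeat time

    def heartbeat(self, node):
        self.last_beat[node] = time.monotonic()

    def suspected(self, node):
        last = self.last_beat.get(node)
        return last is None or time.monotonic() - last > self.timeout

d = HeartbeatDetector()
d.heartbeat("replica-1")
print(d.suspected("replica-1"))   # False: fresh heartbeat
time.sleep(0.001)                 # 1 ms, far beyond the 50 μs lease
print(d.suspected("replica-1"))   # True: lease expired, node suspected
</code>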
  
\\
  * **Byzantine-resilient heterogeneous GANs**: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part due to the need to distribute machine learning across many nodes for performance and privacy reasons. Until now it has focused on training a single model across many workers and many parameter servers. While this approach has brought formidable results, including in GAN training, the topic of efficient, distributed and Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of Generative Adversarial Networks (GANs), such learning is critical to training light discriminators that can specialize in detecting specific features of generator-generated images. The goal of this project is to investigate the potential for poisoning of the GAN training process by malicious discriminators and generators, and to investigate efficient protocols that ensure the robustness of the training process (a robust-aggregation sketch follows below). You will need experience with scientific computing in Python, ideally with PyTorch, and notions of distributed computing. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
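A minimal PyTorch sketch of one possible defense (an assumption of this illustration, not the project's prescribed protocol): train a generator against several discriminators and combine the per-discriminator generator gradients with a coordinate-wise median, so that a single poisoned discriminator cannot steer the update on its own.

<code python>
# Toy multi-discriminator GAN step with robust (median) gradient aggregation.
import torch
import torch.nn as nn

torch.manual_seed(0)
gen = nn.Linear(8, 16)                            # toy generator
discs = [nn.Linear(16, 1) for _ in range(5)]      # five toy discriminators
opt = torch.optim.SGD(gen.parameters(), lr=0.01)
params = list(gen.parameters())

z = torch.randn(32, 8)
fake = gen(z)
per_disc = []
for d in discs:
    loss = -d(fake).mean()                        # generator loss vs. this critic
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    per_disc.append(torch.cat([g.flatten() for g in grads]))

# Coordinate-wise median across the five gradient estimates.
robust = torch.median(torch.stack(per_disc), dim=0).values

offset = 0
for p in params:                                  # write the robust gradient back
    p.grad = robust[offset:offset + p.numel()].view_as(p).clone()
    offset += p.numel()
opt.step()
</code>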
\\
  
  * **GANs with Transformers**: Since their introduction in 2017, Transformer architectures have revolutionized NLP machine learning models. Thanks to the scalability of self-attention-only architectures, models can now scale to trillions of parameters, allowing human-like capacities of text generation. However, they are not without shortcomings, notably due to their maximum-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning, Generative Adversarial Networks (GANs), performs remarkably well on images, but until recently struggled with text, whose sequential and discrete nature is not compatible with the gradient back-propagation GANs need to train (see the sketch below). Some of those issues have been solved, but a major one remains: scalability, due to the use of RNNs instead of pure self-attention architectures. We previously showed that RNN layers cannot be trivially replaced with Transformer layers ([[https://arxiv.org/abs/2108.12275]], presented at RANLP 2021). This project will build on those results and attempt to create stable Transformer-based text GANs based on the tricks known to stabilize Transformer training, or attempt to theoretically demonstrate the inherent instability of Transformer-derived architectures in an adversarial regime. \\ You will need a solid background in linear algebra, acquaintance with the theory of machine learning, specifically neural networks, and experience with scientific computing in Python, ideally with PyTorch. Experience with NLP is desirable but not required.
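The incompatibility between discrete token sampling and back-propagation, and one standard workaround (the Gumbel-softmax straight-through relaxation), can be seen in a few lines of PyTorch. This is illustrative only; the project may take entirely different routes.

<code python>
# Discrete sampling blocks gradients; Gumbel-softmax restores a gradient path.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 100, requires_grad=True)    # batch of 4, vocab of 100

hard_tokens = torch.argmax(logits, dim=-1)           # discrete: no gradient flows
print(hard_tokens.requires_grad)                     # False

soft_tokens = F.gumbel_softmax(logits, tau=0.5, hard=True)  # one-hot yet differentiable
loss = soft_tokens.sum()                             # stand-in for a critic's score
loss.backward()
print(logits.grad is not None)                       # True: the generator can train
</code>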
\\
  
===== Semester Projects =====
  
EPFL I&C duration, credits and workload information are available [[https://www.epfl.ch/schools/ic/education/|here]]. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.