Differences

This shows you the differences between two versions of the page.

education [2019/05/27 12:10]
fablpd
education [2022/04/19 17:49]
fablpd
Line 8: Line 8:
 \\
  
-  * [[education/ca_2018|Concurrent Algorithms]] (theory & practice)
+  * [[education/ca_2021|Concurrent Algorithms]] (theory & practice)
   * [[education/da|Distributed Algorithms]] (theory & practice)
 \\
Line 26: Line 26:
 DCL offers master projects in the following areas:
  
-  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact [[matteo.monti@epfl.ch|Matteo Monti]] to get more information.
+  * **[[cryptocurrencies|Cryptocurrencies]]**: We have several project openings as part of our ongoing research on designing new cryptocurrency systems. Please contact [[rachid.guerraoui@epfl.ch|Prof. Rachid Guerraoui]].
  
  
-  * **Distributed computing using RDMA and/or NVRAM**: contact [[https://people.epfl.ch/igor.zablotchi|Igor Zablotchi]] for more information.
+  * **Decentralized authentication for cryptocurrencies**: Current cryptocurrency systems use expensive cryptographic operations to authenticate users. These heavy computations limit the number of users and operations a system can serve concurrently, which prevents it from scaling. Our recent research shows that a decentralized authentication algorithm can bypass the cryptographic bottleneck and make cryptocurrency systems faster and more available (the bottleneck is illustrated in the sketch below). This is a practical project which requires good knowledge of network programming, preferably in Rust (otherwise C++), and of the basics of cryptography (hash functions, asymmetric cryptography). Preferred skills include distributed algorithms and more advanced cryptography such as BLS signatures. Contact Pierre-Louis Roman <pierre-louis.roman@epfl.ch> for more information.
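
A minimal micro-benchmark illustrating why signature verification, rather than hashing, is the throughput bottleneck this project targets. The payload, iteration count, and the use of Ed25519 via the ''cryptography'' package are our own illustrative choices, not part of the project description:

<code python>
# Hypothetical micro-benchmark: verifying an Ed25519 signature is orders of
# magnitude slower than hashing the same payload, which is why
# per-transaction signature checks limit cryptocurrency throughput.
# Requires the 'cryptography' package (pip install cryptography).
import time
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

payload = b"transfer 10 coins from alice to bob"
key = ed25519.Ed25519PrivateKey.generate()
signature = key.sign(payload)
public_key = key.public_key()

N = 1000

start = time.perf_counter()
for _ in range(N):
    hashlib.sha256(payload).digest()
hash_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    public_key.verify(signature, payload)  # raises InvalidSignature on failure
verify_time = time.perf_counter() - start

print(f"hashing:      {N / hash_time:,.0f} ops/s")
print(f"verification: {N / verify_time:,.0f} ops/s")
</code>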
  
-  * **[[Distributed ML|Distributed Machine Learning]]**: contact [[http://people.epfl.ch/georgios.damaskinos|Georgios Damaskinos]] for more information.
+  * **Topology-aware mempool for cryptocurrencies**: The mempool is a core component of cryptocurrency systems. It disseminates user transactions to the miner nodes before they reach consensus. Current mempools assume a homogeneous network topology where all machines have the same bandwidth and latency. This unrealistic assumption forces the system to progress at the same speed as the slowest node in the system. This project aims at implementing a mempool which exploits the heterogeneity of the network to speed up data dissemination for cryptocurrency systems (a gossip sketch follows below). This is a practical project which requires good knowledge of network programming (either Go or C++) and of distributed algorithms. Contact Gauthier Voron <gauthier.voron@epfl.ch> for more information.
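
A minimal sketch of the core idea, under assumptions of ours: peers are annotated with a bandwidth estimate, and gossip targets are sampled with probability proportional to it rather than uniformly. All names and numbers are hypothetical:

<code python>
# Topology-aware dissemination sketch: bias gossip-target selection toward
# high-bandwidth peers so fast links carry most of the traffic.
import random

peers = {"node-a": 10_000, "node-b": 1_000, "node-c": 100}  # peer -> Mbit/s

def pick_gossip_targets(peers, fanout=2):
    """Sample 'fanout' distinct peers, biased toward high-bandwidth links."""
    remaining = dict(peers)
    chosen = []
    for _ in range(min(fanout, len(remaining))):
        names = list(remaining)
        weights = [remaining[n] for n in names]
        pick = random.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del remaining[pick]
    return chosen

def disseminate(tx, peers, fanout=2):
    for peer in pick_gossip_targets(peers, fanout):
        print(f"forwarding {tx!r} to {peer}")  # placeholder for a real send

disseminate("tx-42", peers)
</code>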
  
-  * **Robust Distributed Machine Learning**: With the proliferation of big datasets and models, Machine Learning is becoming distributed. Following the standard parameter server model, the learning phase is handled by two categories of machines: parameter servers and workers. Any of these machines could behave arbitrarily (i.e., be Byzantine), affecting the model convergence in the learning phase. Our goal in this project is to build a system that is robust against Byzantine behavior of both parameter servers and workers. Our first prototype, AggregaThor (https://www.sysml.cc/doc/2019/54.pdf), describes the first scalable robust Machine Learning framework. It fixed a severe vulnerability in TensorFlow and showed how to make TensorFlow even faster, while remaining robust. Contact [[https://people.epfl.ch/arsany.guirguis|Arsany Guirguis]] or [[https://people.epfl.ch/sebastien.rouault|Sébastien Rouault]] for more information.
+  * **Robust mean estimation**: In recent years, many algorithms have been proposed to perform robust mean estimation, which has been shown to be equivalent to robust gradient-based machine learning. A new concept has been proposed to define the performance of a robust mean estimator, called the [[https://arxiv.org/abs/2008.00742|averaging constant]] (along with the Byzantine resilience). This research project consists of computing the theoretical averaging constant of different proposed robust mean estimators, and of studying their empirical performance on randomly generated vectors (see the sketch below). Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
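
A small, self-contained sketch of the empirical side of the project: comparing a few well-known robust mean estimators on randomly generated vectors with injected Byzantine outliers. The choice of estimators and parameters is illustrative only:

<code python>
# Compare robust mean estimators under Byzantine outliers. The true mean of
# the honest vectors is 0, so the norm of each estimate is its error.
import numpy as np

rng = np.random.default_rng(0)
n, f, d = 20, 4, 10                              # workers, Byzantine, dim
honest = rng.normal(0.0, 1.0, size=(n - f, d))   # true mean is 0
byzantine = np.full((f, d), 100.0)               # adversarial outliers
vectors = np.vstack([honest, byzantine])

def trimmed_mean(x, f):
    """Coordinate-wise mean after dropping the f largest/smallest values."""
    s = np.sort(x, axis=0)
    return s[f:-f].mean(axis=0)

estimators = {
    "mean": lambda x: x.mean(axis=0),
    "coordinate-wise median": lambda x: np.median(x, axis=0),
    "trimmed mean": lambda x: trimmed_mean(x, f),
}
for name, est in estimators.items():
    err = np.linalg.norm(est(vectors))  # distance to the true mean (0)
    print(f"{name:24s} error = {err:.3f}")
</code>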
  
-  * **Stochastic gradient: (artificial) reduction of the ratio variance/norm for adversarial distributed SGD**: One computationally efficient and non-intrusive line of defense for adversarial distributed SGD (e.g., one parameter server distributing the gradient estimation to several, possibly adversarial, workers) relies on the honest workers sending back gradient estimations with sufficiently low variance; an assumption that is sometimes hard to satisfy in practice. One solution could be to (drastically) increase the batch size at the workers, but doing so may defeat the very purpose of distributing the computation. In this project, we propose two approaches that you can choose to explore (you may also propose a different approach) to (artificially) reduce the ratio variance/norm of the stochastic gradients, while keeping the benefits of the distribution. The first, speculative, approach boils down to "intelligent" coordinate selection. The second makes use of some kind of "momentum" at the workers.
-[1] "​Machine Learning with Adversaries:​ Byzantine Tolerant Gradient Descent"​ (https://​papers.nips.cc/​paper/​6617-machine-learning-with-adversaries-byzantine-tolerant-gradient-descent) 
-[2] "​Federated Learning: Strategies for Improving Communication Efficiency"​ (https://​arxiv.org/​abs/​1610.05492) 
  
-  * **Consistency in global-scale storage systems**: We offer several projects in the context of storage systems, ranging from the implementation of social applications (similar to [[http://retwis.redis.io/|Retwis]] or [[https://github.com/share/sharejs|ShareJS]]) to recommender systems, static content storage services (à la [[https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf|Facebook's Haystack]]), or experimenting with well-known cloud serving benchmarks (such as [[https://github.com/brianfrankcooper/YCSB|YCSB]]); please contact [[http://people.epfl.ch/dragos-adrian.seredinschi|Adrian Seredinschi]] or [[https://people.epfl.ch/karolos.antoniadis|Karolos Antoniadis]] for further information.
+  * **Accelerate Byzantine collaborative learning**: [[https://arxiv.org/abs/2008.00742|Our recent NeurIPS paper]] proposed algorithms for collaborative machine learning in the presence of Byzantine nodes, which have been proved to be near-optimal in terms of the quality of the model at convergence. However, these algorithms require all-to-all communication at every round, which is suboptimal. This research consists of designing a practical solution to Byzantine collaborative learning based on the idea of a random communication network at each round, with both theoretical guarantees and a practical implementation (one round is sketched below). Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
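
A toy sketch of the round structure the project would study, under our own simplifying assumptions: models are plain vectors, peers are sampled uniformly, and the robust aggregation rule is a coordinate-wise median:

<code python>
# Each round, every node robustly averages its model with a small random
# sample of peers instead of performing all-to-all exchange.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 16, 5, 4                    # nodes, model dimension, sample size
models = rng.normal(size=(n, d))      # each node's current parameters

def one_round(models, k):
    new_models = np.empty_like(models)
    for i in range(len(models)):
        others = [j for j in range(len(models)) if j != i]
        sample = rng.choice(others, size=k, replace=False)
        # robust aggregation over own model + sampled peers' models
        group = np.vstack([models[i], models[sample]])
        new_models[i] = np.median(group, axis=0)
    return new_models

for _ in range(10):
    models = one_round(models, k)
print("spread after 10 rounds:", models.std(axis=0).max())
</code>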
  
 +
 +
 +  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results (a numeric sketch follows below). Please contact [[matteo.monti@epfl.ch|Matteo Monti]] to get more information.
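
A small numeric sketch in the spirit of option (ii): evaluating the probability that a uniformly drawn sample of peers contains a Byzantine majority, the kind of quantity that sample-based probabilistic broadcast analyses bound. Parameters are illustrative and not taken from the underlying papers:

<code python>
# Hypergeometric tail: probability that at least 'threshold' of k peers
# sampled without replacement are Byzantine, with f Byzantine among n total.
from math import comb

def bad_sample_probability(n, f, k, threshold):
    return sum(
        comb(f, i) * comb(n - f, k - i) for i in range(threshold, k + 1)
    ) / comb(n, k)

n, f = 1024, 100          # system size, Byzantine nodes (illustrative)
for k in (16, 32, 64):
    p = bad_sample_probability(n, f, k, threshold=k // 2 + 1)
    print(f"sample size {k:3d}: P[Byzantine majority] = {p:.2e}")
</code>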
 +
 +  * **Distributed coordination using RDMA**: RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. This technology has been gaining traction over the last couple of years, as it allows for the creation of real-time distributed systems: communication takes place close to the μsec scale, which enables the design and implementation of systems that process requests in only tens of μsec. Current research focuses on achieving real-time failure detection through a combination of novel algorithm design, the latest hardware, and Linux kernel customization. Fast failure detection over RDMA brings the notion of availability to a new level, essentially allowing modern systems to enter the era of 7 nines of availability (the lease logic is sketched below). Contact [[https://people.epfl.ch/athanasios.xygkis|Athanasios Xygkis]] and [[https://people.epfl.ch/antoine.murat|Antoine Murat]] for more information.
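
The RDMA data path itself cannot be shown in a few lines; the sketch below only captures the lease-based heartbeat logic that such a failure detector runs, with an illustrative lease duration (over RDMA the same logic would operate at tens of microseconds by writing heartbeats directly into remote memory):

<code python>
# Lease-based heartbeat failure detector (logic only, no RDMA).
import time

LEASE_US = 50_000          # illustrative lease: 50 ms here; tens of μs
                           # would be the target over RDMA

class FailureDetector:
    def __init__(self):
        self.last_heartbeat_us = {}

    def heartbeat(self, node):
        """Called whenever a heartbeat from 'node' is observed."""
        self.last_heartbeat_us[node] = time.monotonic_ns() // 1_000

    def suspects(self):
        """Nodes whose lease has expired are suspected to have failed."""
        now_us = time.monotonic_ns() // 1_000
        return [n for n, t in self.last_heartbeat_us.items()
                if now_us - t > LEASE_US]

fd = FailureDetector()
fd.heartbeat("node-1")
time.sleep(0.06)           # exceed the lease
print(fd.suspects())       # ['node-1']
</code>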
 +
 +
 + 
  
  
 \\
  
 +  * **Byzantine-resilient heterogeneous GANs**: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part due to the need to distribute machine learning across many nodes for performance and privacy reasons. Until now it has focused on training a single model across many workers and many parameter servers. While this approach has produced formidable results, including in GAN training, the efficient, distributed, and Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of generative adversarial networks (GANs), such learning is critical to training light discriminators that can specialize in detecting specific features of generator-generated images. The goal of this project is to investigate the potential for poisoning of the GAN training process by malicious discriminators and generators, and to design efficient protocols that ensure the robustness of the training process (a toy aggregation sketch follows below). You will need experience with scientific computing in Python, ideally with PyTorch, and notions of distributed computing. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
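
A toy illustration, with made-up shapes and values, of why robust aggregation of per-discriminator feedback matters: a median-based rule blunts poisoned gradients that would dominate a plain mean:

<code python>
# A generator aggregating gradient feedback from several discriminators,
# two of which are malicious. Not a real GAN: vectors stand in for gradients.
import numpy as np

honest_grads = np.random.default_rng(2).normal(0.0, 0.1, size=(5, 8))
malicious_grads = np.full((2, 8), 10.0)     # poisoned feedback
grads = np.vstack([honest_grads, malicious_grads])

mean_update = grads.mean(axis=0)            # dominated by the attackers
median_update = np.median(grads, axis=0)    # stays close to honest feedback

print("mean   update norm:", np.linalg.norm(mean_update).round(2))
print("median update norm:", np.linalg.norm(median_update).round(2))
</code>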
 +\\
 +
 +  * **Hijacking proof-of-work to make it useful: a distributed gradient-free learning approach**: Proof-of-work blockchains, notably Bitcoin and Ethereum, reach a probabilistic consensus about the contents of the blockchain through a mechanism of probabilistic leader election. Every contributor to the consensus tries to solve a puzzle, and the first one to succeed is elected leader, allowed to create the next block and publicly add information to it. The puzzle needs to be hard to solve, easy to verify, solvable only by random guessing (with no shortcuts), and tunable in difficulty so that nodes don't find answers simultaneously and fork the chain in two. Partial cryptographic hash reversal has traditionally been a perfect candidate for such a puzzle, but it is of no use outside the blockchain itself. With 100-300 PetaFLOP/s (drawing 100 TWh/y) of general-purpose computational power tied into the Ethereum blockchain alone as of early 2022, the waste of computational resources and energy is colossal. While the interest of blockchains and the suitability of proof-of-work as a mechanism to run them are widely debated, it is to this day the mechanism behind the two largest ones. We try to make at least some of that computation useful by injecting a "try" step of a (1,λ)-ES evolutionary search algorithm into the hash computation loop, slowing it down and making it do something useful during the slowdown (see the sketch below). This class of evolutionary search algorithms achieves good performance on black-box optimization tasks (sometimes exceeding RL approaches on traditionally RL problems), is embarrassingly parallel, fits the requirements for a proof-of-work function well, and can be empirically optimized to minimize the waste of computational resources during a training run. However, in its current state, the (1,λ)-ES-based useful proof-of-work has only been proven to work when the data used for the training tasks can be fully replicated among the nodes; for numerous applications, that is not an option. Finding ways to solve this problem, from both a theoretical and an experimental perspective, is the goal of this project. You will need solid skills in Python (Rust and WebAssembly are a plus) and a basic understanding of distributed algorithms and of machine learning concepts. Some familiarity with blockchains and black-box optimization is a plus, but not a requirement. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
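
A toy sketch of the injection idea, assuming a trivial black-box objective and an illustrative difficulty: each mining attempt first performs one (1,λ)-ES generation, so hashing throughput is traded for optimization progress:

<code python>
# Useful proof-of-work sketch: one (1,λ)-ES "try" step per hash attempt.
# The fitness function and difficulty below are illustrative placeholders.
import hashlib
import random

DIFFICULTY = 2 ** 245                     # accept hashes below this value
fitness = lambda x: -(x - 3.0) ** 2       # toy black-box objective, max at 3
parent, sigma, lam = 0.0, 0.5, 4          # (1,λ)-ES state

def es_step(parent):
    """One (1,λ)-ES generation: sample λ children, keep the fittest."""
    children = [parent + random.gauss(0.0, sigma) for _ in range(lam)]
    return max(children, key=fitness)

nonce = 0
while True:
    parent = es_step(parent)              # the injected useful "try" step
    digest = hashlib.sha256(f"block-data|{parent}|{nonce}".encode()).digest()
    if int.from_bytes(digest, "big") < DIFFICULTY:
        break                             # puzzle solved: block can be mined
    nonce += 1

print(f"mined with nonce={nonce}, ES solution ~ {parent:.3f}")
</code>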
 +\\
 +
 +\\
  
 ===== Semester Projects =====
Line 51: Line 63:
  
 EPFL I&C duration, credits and workload information are available [[https://www.epfl.ch/schools/ic/education/|here]]. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.
 +
 +