\\
  * [[education/ca_2021|Concurrent Algorithms]] (theory & practice)
  * [[education/da|Distributed Algorithms]] (theory & practice)
\\
  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results (see the sketch below). Please contact [[matteo.monti@epfl.ch|Matteo Monti]] to get more information.
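To give a flavour of the practical option, here is a minimal sketch of how delivery probabilities can be estimated numerically by Monte Carlo simulation. The gossip fanout/round model and the silent-Byzantine adversary are illustrative assumptions only, not the algorithms studied in the project.

<code python>
import random

def delivery_probability(n, f, fanout, rounds, trials=2000):
    """Monte Carlo estimate of the probability that a gossip broadcast
    from a correct source reaches all correct processes, assuming the
    f Byzantine processes simply stay silent (an illustrative adversary)."""
    correct = set(range(n - f))            # processes 0 .. n-f-1 are correct
    successes = 0
    for _ in range(trials):
        informed = {0}                     # process 0 is the (correct) source
        for _ in range(rounds):
            new = set()
            for p in informed:
                if p in correct:           # Byzantine processes forward nothing
                    new.update(random.sample(range(n), fanout))
            informed |= new
        successes += correct <= informed   # did every correct process hear it?
    return successes / trials

print(delivery_probability(n=100, f=20, fanout=5, rounds=6))
</code>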
  
  * **Distributed coordination using RDMA.** RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. This technology has been gaining traction over the last couple of years because it enables real-time distributed systems: communication takes place close to the μsec scale, which makes it possible to design and implement systems that process requests in only tens of μsec. Current research focuses on achieving real-time failure detection through a combination of novel algorithm design, the latest hardware, and Linux kernel customization (see the sketch below). Fast failure detection over RDMA takes availability to a new level, essentially allowing modern systems to enter the era of seven nines of availability. Contact [[https://people.epfl.ch/athanasios.xygkis|Athanasios Xygkis]] and [[https://people.epfl.ch/antoine.murat|Antoine Murat]] for more information.
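As a toy illustration of the failure-detection idea (not the lab's actual RDMA stack), the sketch below monitors a heartbeat counter and raises a suspicion once it stops changing. In a real deployment the monitor would read the counter remotely with one-sided RDMA READs, without involving the monitored CPU, and the timeout would sit near the μsec scale rather than the milliseconds used here.

<code python>
import threading, time

heartbeat = [0]                   # stand-in for a counter in RDMA-registered memory

def monitored(stop):
    while not stop.is_set():
        heartbeat[0] += 1         # "I am alive": bump the counter
        time.sleep(0.001)

def monitor(timeout_s=0.01):
    last, last_change = heartbeat[0], time.monotonic()
    while True:
        now = time.monotonic()
        cur = heartbeat[0]        # a real system would issue a remote RDMA READ here
        if cur != last:
            last, last_change = cur, now
        elif now - last_change > timeout_s:
            return f"suspect: no heartbeat for {now - last_change:.4f}s"
        time.sleep(0.001)

stop = threading.Event()
threading.Thread(target=monitored, args=(stop,), daemon=True).start()
time.sleep(0.05)                  # let the process run, then simulate its crash
stop.set()
print(monitor())
</code>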
  
  
  
\\
  
  * **Byzantine-resilient heterogeneous GANs**: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part because performance and privacy concerns push machine learning to be distributed across many nodes. Until now it has focused on training a single model across many workers and many parameter servers. While this approach has produced formidable results, including in GAN training, efficient, distributed, Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of generative adversarial networks (GANs), such training is critical for light discriminators that specialize in detecting specific features of generated images. The goal of this project is to investigate how malicious discriminators and generators can poison the GAN training process, and to design efficient protocols that keep training robust (see the sketch below). You will need experience with scientific computing in Python, ideally with PyTorch, and notions of distributed computing. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
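As background, a minimal sketch of the kind of Byzantine-robust aggregation such training builds on. Coordinate-wise median is one classical rule from the robust-learning literature, shown here purely for illustration; the gradient dimensions and attack values are made up.

<code python>
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate worker gradients coordinate by coordinate, so a minority
    of Byzantine workers cannot drag any coordinate arbitrarily far
    (unlike plain averaging, which a single worker can hijack)."""
    return np.median(np.stack(gradients), axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(7)]   # true gradient ~ 1
byzantine = [np.full(4, 1e6) for _ in range(3)]             # poisoned updates
updates = honest + byzantine
print("mean:  ", np.mean(np.stack(updates), axis=0))        # ruined by attackers
print("median:", coordinate_wise_median(updates))           # stays close to 1
</code>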
  
\\
  
===== Semester Projects =====
EPFL I&C duration, credits and workload information are available [[https://www.epfl.ch/schools/ic/education/|here]]. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.
  