Differences

This shows you the differences between two versions of the page.

education [2021/01/04 15:28]
fablpd
education [2021/09/08 09:59]
fablpd
Line 8: Line 8:
 \\
  
-  * [[education/ca_2020|Concurrent Algorithms]] (theory & practice)
+  * [[education/ca_2021|Concurrent Algorithms]] (theory & practice)
   * [[education/da|Distributed Algorithms]] (theory & practice)
 \\
Line 42: Line 42:
  
    
-  * **Consistency in global-scale storage systems**: We offer several projects in the context of storage systems, ranging from the implementation of social applications (similar to [[http://retwis.redis.io/|Retwis]] or [[https://github.com/share/sharejs|ShareJS]]) and recommender systems, to static content storage services (à la [[https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf|Facebook's Haystack]]), to experimenting with well-known cloud serving benchmarks (such as [[https://github.com/brianfrankcooper/YCSB|YCSB]]); please contact [[http://people.epfl.ch/dragos-adrian.seredinschi|Adi Seredinschi]] or [[https://people.epfl.ch/karolos.antoniadis|Karolos Antoniadis]] for further information.
  
  
Line 48: Line 47:
  
   * **Byzantine-resilient heterogeneous GANs**: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part due to the need to distribute machine learning across many nodes for performance and privacy reasons. Until now it has focused on training a single model across many workers and many parameter servers. While this approach has produced formidable results, including in GAN training, the topic of efficient, distributed, and Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of Generative Adversarial Networks (GANs), such learning is critical to training light discriminators that can specialize in detecting specific features of generator-generated images. The goal of this project is to investigate the potential for poisoning of the GAN training process by malicious discriminators and generators, and to design efficient protocols that ensure the robustness of the training process (a minimal sketch of one robust aggregation rule appears after this list). You will need experience with scientific computing in Python, ideally with PyTorch, and notions of distributed computing. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
 +\\
 +
 +  * **GANs with Transformers**: Since their introduction in 2017, Transformer architectures have revolutionized NLP machine learning models. Thanks to the scalability of self-attention-only architectures, models can now scale to trillions of parameters, enabling human-like text generation. They are not without shortcomings, however, notably due to their maximum-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning, Generative Adversarial Networks (GANs), performs remarkably well on images but has until recently struggled with text, whose sequential and discrete nature is incompatible with the gradient back-propagation GANs need in order to train. Some of those issues have been solved, but a major one remains: poor scalability, due to the use of RNNs instead of pure self-attention architectures. Previously, we were able to show that RNN layers cannot be trivially replaced with Transformer layers (https://arxiv.org/abs/2108.12275, presented at RANLP 2021). This project will build on those results and attempt either to create stable Transformer-based text GANs using the tricks known to stabilize Transformer training, or to demonstrate theoretically the inherent instability of Transformer-derived architectures in an adversarial regime (a toy sketch of one workaround for discrete sampling appears below). You will need a solid background in linear algebra, acquaintance with machine learning theory (specifically neural networks), and experience with scientific computing in Python, ideally with PyTorch. Experience with NLP is desirable but not required.
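
To give a concrete flavour of the Byzantine-resilience theme above, the following is a minimal sketch of coordinate-wise median aggregation, one classical robust rule from the federated-learning literature. It is illustrative only: the function name, toy sizes, and setup are assumptions for this example, not the lab's codebase or the project's protocol.

<code python>
# Illustrative sketch only: coordinate-wise median, a classical
# Byzantine-resilient aggregation rule. Names and sizes are made up
# for this example.
import torch

def aggregate_median(worker_grads):
    """Coordinate-wise median of per-worker (flattened) gradients.

    A minority of Byzantine workers sending arbitrary values cannot
    drag any coordinate of the aggregate far from the honest median.
    """
    stacked = torch.stack(worker_grads)  # shape: (n_workers, dim)
    return stacked.median(dim=0).values  # robust, per coordinate

# Toy usage: 4 honest workers plus 1 "poisoning" worker.
honest = [torch.randn(10) for _ in range(4)]
byzantine = [torch.full((10,), 1e6)]  # adversarial gradient
print(aggregate_median(honest + byzantine))  # stays near honest values
</code>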
 +
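
To make the back-propagation obstacle in the Transformer-GAN project concrete, the sketch below uses the Gumbel-softmax relaxation, one known trick for passing generator gradients through discrete token sampling. Everything here (class names, toy sizes, two-layer encoders) is an assumption for illustration; this is a sketch of the general technique, not the project's architecture.

<code python>
# Illustrative sketch only: a toy Transformer-based text-GAN step with
# the Gumbel-softmax relaxation for discrete tokens. All names and
# sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, SEQ = 100, 32, 16  # toy sizes

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_vocab = nn.Linear(DIM, VOCAB)

    def forward(self, noise_tokens):
        logits = self.to_vocab(self.encoder(self.embed(noise_tokens)))
        # Differentiable "soft" one-hot samples instead of a hard argmax,
        # so gradients can flow back into the generator:
        return F.gumbel_softmax(logits, tau=0.5, hard=False)

class TinyDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(VOCAB, DIM)  # accepts soft one-hots
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, 1)

    def forward(self, soft_tokens):
        h = self.encoder(self.embed(soft_tokens))
        return self.head(h.mean(dim=1))  # one real/fake score per sequence

G, D = TinyGenerator(), TinyDiscriminator()
noise = torch.randint(0, VOCAB, (8, SEQ))     # (batch, seq) of token ids
fake = G(noise)                               # (batch, seq, vocab), soft
score = D(fake)                               # (batch, 1)
loss_g = F.binary_cross_entropy_with_logits(score, torch.ones_like(score))
loss_g.backward()  # gradients reach G through the soft samples
</code>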
  
 \\
Line 57: Line 87:
 EPFL I&C duration, credits and workload information are available [[https://www.epfl.ch/schools/ic/education/|here]]. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.
  
-===== Collaborative Projects =====
-
-The lab also collaborates with industry and other labs at EPFL to offer interesting student projects motivated by real-world problems. With [[http://lara.epfl.ch|LARA]] and the [[interchain.io|Interchain Foundation]] we have several projects:
-
-  - **[[https://dcl.epfl.ch/site/cryptocurrencies|AT2]]:** Integration of an asynchronous (consensus-less) payment system in the Cosmos Hub.
-  - **[[https://github.com/cosmos/ics/tree/master/ibc|Interblockchain Communication (IBC)]]:** Protocol descriptions (and optional implementation) for enabling the interoperation of independent blockchain applications.
-  - **[[http://stainless.epfl.ch|Stainless]]:** Implementation of Tendermint modules (consensus, mempool, fast sync) using Stainless and Scala.
-  - **[[https://github.com/viperproject/prusti-dev|Prusti]]:** Implementation of Tendermint modules (consensus, mempool, fast sync) using Prusti and the Rust programming language.
-  - **[[https://tendermint.com/docs/spec/reactors/mempool/functionality.html#mempool-functionality|Mempool]]:** Performance analysis and algorithm improvement.
-  - **Adversarial engineering:** Experimental evaluation of Tendermint in adversarial settings (e.g., in the style of [[http://jepsen.io/analyses/tendermint-0-10-2|Jepsen]]).
-  - **Testing:** Generation of tests out of specifications (TLA+ or Stainless) for the consensus module of Tendermint.
-  - **Facebook Libra comparative research:** Comparative analysis of consensus algorithms, specifically between HotStuff (the consensus algorithm underlying [[https://cryptorating.eu/whitepapers/Libra/libra-consensus-state-machine-replication-in-the-libra-blockchain.pdf|Facebook's Libra]]) and Tendermint consensus.

-Contact [[adi@interchain.io|Adi Seredinschi]] (INR 327) if you are interested in learning more about these projects.