Differences

This shows you the differences between two versions of the page.

education [2021/09/06 11:55]
fablpd
education [2021/10/08 12:48]
fablpd
Line 32: Line 32:
  * **Making Blockchain Accountable**: Abstract: One of the key drawbacks of blockchain is its lack of accountability: it does not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be communication-costly: to detect a malicious participant who has sent deceitful messages to different honest participants to make them disagree, one may be tempted to force each honest participant to exchange all the messages they receive and cross-check them. However, we have recently designed an algorithm that shares the same communication complexity as the consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it on a distributed set of machines against a baseline implementation. Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.
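The cross-checking idea mentioned above can be illustrated with a toy sketch (this is not the project's algorithm, whose point is precisely to avoid this expensive exchange): honest participants share the messages they received, and a sender caught sending two different values for the same round is flagged as Byzantine. The message format and function name here are hypothetical, and real protocols would use digital signatures as evidence.

```python
def detect_equivocation(received_logs):
    """received_logs: one log per honest participant, each log a list
    of (sender, round, value) tuples the participant received.
    Returns the set of senders caught sending conflicting values."""
    seen = {}       # (sender, round) -> first value observed
    guilty = set()  # senders who equivocated
    for log in received_logs:
        for sender, rnd, value in log:
            key = (sender, rnd)
            if key in seen and seen[key] != value:
                guilty.add(sender)
            seen.setdefault(key, value)
    return guilty

# "byz" tells participant A value 0 and participant B value 1 in round 1.
log_a = [("byz", 1, 0), ("honest", 1, 0)]
log_b = [("byz", 1, 1), ("honest", 1, 0)]
print(detect_equivocation([log_a, log_b]))  # {'byz'}
```

Note that this naive cross-check requires every participant to forward everything it received, which is exactly the communication cost the project's algorithm avoids.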
  
-  * **GAR performances on different datasets**: Robust machine learning on textual data and content recommendation is critical for the safety of social media users (harassment, hate speech, etc.), but also for the reliability of scientific uses of natural language processing, such as processing computer programs, chemistry and drug discovery. Text datasets are known to have long-tailed distributions, which poses specific challenges for robustness, while content recommendation datasets may feature clusters of similar users. The goal of this project is to better understand the properties of different datasets, and what makes a gradient aggregation rule (e.g. Krum, trimmed mean...) better than another, given a specific text dataset (conversational chatbots, translation, GitHub code, etc.). Contact [[https://people.epfl.ch/le.hoang|Lê Nguyên Hoang]] for more information.
+  * **Robust mean estimation**: In recent years, many algorithms have been proposed to perform robust mean estimation, which has been shown to be equivalent to robust gradient-based machine learning. A new concept, the averaging constant, has been proposed to characterize the performance of a robust mean estimator (along with its Byzantine resilience). This research project consists of computing the theoretical averaging constant of different proposed robust mean estimators, and of studying their empirical performance on randomly generated vectors. Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
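As a toy illustration of robust mean estimation (not one of the project's estimators specifically), here is a coordinate-wise trimmed mean, one of the classical robust aggregation rules, evaluated on randomly generated vectors with a few adversarial outliers. All parameters below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, f, d = 20, 4, 5                               # n vectors, f Byzantine, dimension d
honest = rng.normal(0.0, 1.0, size=(n - f, d))   # honest vectors, true mean ~ 0
byzantine = np.full((f, d), 100.0)               # adversarial outliers
vectors = np.vstack([honest, byzantine])

def trimmed_mean(x, f):
    """Coordinate-wise trimmed mean: in each coordinate, drop the f
    largest and f smallest values, then average the rest."""
    s = np.sort(x, axis=0)
    return s[f:-f].mean(axis=0)

plain = vectors.mean(axis=0)        # dragged far from 0 by the outliers
robust = trimmed_mean(vectors, f)   # stays near the honest mean
print("plain:", np.abs(plain).max(), "trimmed:", np.abs(robust).max())
```

Comparing the two estimates on such randomly generated vectors is essentially the empirical side of the project; the theoretical side would quantify how much the surviving coordinates are biased by the trimming, which is what an averaging constant captures.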
  
-  * **Strategyproof collaborative filtering**: In collaborative filtering, other users' inputs are used to generalize the preferences of a given user. Such an approach has been critical to improving performance. However, it exposes each user to being manipulated by the inputs of malicious users, which is arguably currently occurring on social media. In this theoretical project, we search for Byzantine-resilient and strategyproof learning algorithms to perform something akin to collaborative filtering. This would also have important applications for implicit voting systems on exponential-size decision sets. Contact [[https://people.epfl.ch/le.hoang|Lê Nguyên Hoang]] for more information.
+  * **Accelerate Byzantine collaborative learning**: Our recent NeurIPS paper proposed algorithms for collaborative machine learning in the presence of Byzantine nodes, which have been proved to be near optimal at convergence. However, these algorithms require all-to-all communication at every round, which is suboptimal. This research consists of designing a practical solution to Byzantine collaborative learning, based on the idea of a random communication network at each round, with both theoretical guarantees and a practical implementation. Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.

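To give an intuition for the random-communication-network idea, here is a toy sketch under simplifying assumptions (scalar values, no Byzantine nodes, no gradients): each node exchanges with a single random partner per round instead of messaging all n-1 others, and the nodes still converge to the global average. The function names and parameters are made up for the example:

```python
import random

def sparse_round(values, rng):
    """One communication round over a random perfect matching: each
    node talks to a single random partner and both adopt the pair's
    average, instead of all-to-all communication."""
    n = len(values)
    order = rng.sample(range(n), n)   # random pairing of all nodes
    new = list(values)
    for a, b in zip(order[::2], order[1::2]):
        new[a] = new[b] = (values[a] + values[b]) / 2.0
    return new

rng = random.Random(1)
values = [float(v) for v in range(10)]   # global average is 4.5
for _ in range(100):
    values = sparse_round(values, rng)
print(max(values) - min(values))         # spread shrinks toward 0
```

Each round costs O(n) messages rather than O(n^2); the project's challenge is to retain such savings while also tolerating Byzantine nodes, which this sketch does not attempt.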
+  * **Decentralize Tournesol's learning algorithms**: The Tournesol platform leverages the contributions of its community to assign a « should be more recommended » score to YouTube videos rated by the contributors, using a learning algorithm. Currently, the computations are performed on a central server. But as Tournesol's user base grows, and as more sophisticated learning algorithms are considered for deployment, there is a growing need to decentralize the computations of the learning algorithm. This project aims to build a framework that will enable Tournesol users to run part of the computations of Tournesol's scores directly in their browsers. Contact [[https://people.epfl.ch/le.hoang/?lang=en|Lê Nguyên Hoang]] for more information.

+  * **Listening to the silent majority**: Vanilla machine learning from user-generated data inevitably favors those who generated the largest amounts of data. But this means that learning algorithms will be optimized for these users, rather than for the silent majority. This research aims to correct this bias by trying to infer what data the majority would likely have generated, and what the models would have learned if the silent majority's data had been included in training. It involves designing algorithms, proving their correctness and implementing them. This research is motivated by the Tournesol project. Contact [[https://people.epfl.ch/le.hoang/?lang=en|Lê Nguyên Hoang]] for more information.

+  * **Should experts be given more voting rights?**: This is a question that Condorcet tackled in 1785, through what is now known as the jury problem. However, his model was crude and does not apply to many critical problems, e.g. determining whether a video on vaccines should be widely recommended. This research aims to better understand how voting rights should be allocated, based not only on how likely voters are to be correct, but also on the correlations between the voters' judgments. So far, it mostly involves theoretical analysis. This research is motivated by the Tournesol project. Contact [[https://people.epfl.ch/le.hoang/?lang=en|Lê Nguyên Hoang]] for more information.
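Condorcet's 1785 computation can be sketched in a few lines: with n independent voters who are each correct with probability p > 1/2, the probability that the majority is correct grows with n. The independence assumption baked into this formula is precisely the crude part that the project questions:

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, reaches the right decision
    (n odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p > 1/2, adding voters helps: the jury theorem's conclusion.
for n in (1, 11, 101):
    print(n, majority_correct(n, 0.6))
```

With correlated judgments (e.g. voters reading the same sources), the effective jury is smaller than n and this monotonic improvement breaks down, which is why the allocation of voting rights becomes non-trivial.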
  
  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact [[matteo.monti@epfl.ch|Matteo Monti]] for more information.
Line 47: Line 54:
  
  * **Byzantine-resilient heterogeneous GANs**: Byzantine-resilient federated learning has emerged as a major theme over the last couple of years, in large part due to the need to distribute machine learning across many nodes, for both performance and privacy reasons. Until now, it has focused on training a single model across many workers and many parameter servers. While this approach has produced formidable results, including in GAN training, the topic of efficient, distributed and Byzantine-resilient training of heterogeneous architectures remains relatively unexplored. In the context of generative adversarial networks (GANs), such learning is critical to training light discriminators that can specialize in detecting specific features of generator-generated images. The goal of this project is to investigate the potential for poisoning of the GAN training process by malicious discriminators and generators, and to investigate efficient protocols that ensure the robustness of the training process. You will need experience with scientific computing in Python, ideally with PyTorch, and notions of distributed computing. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
 +\\
 +
 +  * **GANs with Transformers**: Since their introduction in 2017, Transformer architectures have revolutionized NLP machine learning models. Thanks to the scalability of self-attention-only architectures, models can now scale to trillions of parameters, allowing human-like capacities of text generation. However, they are not without shortcomings, notably due to their max-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning, Generative Adversarial Networks (GANs), performs remarkably well on images, but has until recently struggled with texts, whose sequential and discrete nature is not compatible with the gradient back-propagation GANs need to train. Some of those issues have been solved, but a major one remains: limited scalability, due to the usage of RNNs instead of pure self-attention architectures. Previously, we were able to show that it is impossible to trivially replace RNN layers with Transformer layers (https://arxiv.org/abs/2108.12275, presented at RANLP 2021). This project will build on those results and attempt to create stable Transformer-based text GANs using the tricks known to stabilize Transformer training, or attempt to theoretically demonstrate the inherent instability of Transformer-derived architectures in the adversarial regime. You will need a solid background in linear algebra, acquaintance with the theory of machine learning, specifically neural networks, and experience with scientific computing in Python, ideally with PyTorch. Experience with NLP is desirable but not required.
 +
  
 \\ \\