  * **Making Blockchain Accountable**: Abstract: One of the key drawbacks of blockchain is its lack of accountability: it does not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be communication-costly: to detect a malicious participant who has sent deceitful messages to different honest participants to make them disagree, one may be tempted to force each honest participant to exchange and cross-check all the messages they receive. However, we have recently designed an algorithm that shares the same communication complexity as the consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it, on a distributed set of machines, against a baseline implementation. Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.
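
To make the cross-checking idea concrete, here is a minimal illustrative sketch (not the project's algorithm): once honest participants exchange the signed messages they received, any sender caught signing two different values for the same round is provably Byzantine.

<code python>
# Illustrative sketch only (not the project's accountable consensus):
# detecting equivocation by cross-checking exchanged signed messages.
from collections import defaultdict

def find_equivocators(received):
    """received: (sender, round, value) triples gathered from all honest
    participants. A sender that signed two different values for the same
    round is provably Byzantine."""
    seen = defaultdict(set)          # (sender, round) -> set of values
    guilty = set()
    for sender, rnd, value in received:
        seen[(sender, rnd)].add(value)
        if len(seen[(sender, rnd)]) > 1:
            guilty.add(sender)       # conflicting signed messages = proof
    return guilty

# A Byzantine node "p3" tells one group to commit "A" and another "B":
msgs = [("p1", 1, "A"), ("p3", 1, "A"), ("p3", 1, "B")]
print(find_equivocators(msgs))       # {'p3'}
</code>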
  
  * **Robust mean estimation**: In recent years, many algorithms have been proposed to perform robust mean estimation, which has been shown to be equivalent to robust gradient-based machine learning. A new concept, the [[https://arxiv.org/abs/2008.00742|averaging constant]], has been proposed to quantify the performance of a robust mean estimator (along with its Byzantine resilience). This research project consists of computing the theoretical averaging constant of different proposed robust mean estimators and studying their empirical performance on randomly generated vectors. Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
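
As a starting point, here is a small illustrative experiment comparing simple mean estimators on randomly generated vectors with Byzantine outliers (the estimators are generic stand-ins; the averaging constant itself is defined in the linked paper).

<code python>
# Illustrative comparison of simple robust mean estimators on random
# vectors with Byzantine outliers. The estimators below are generic
# examples, not the specific ones studied in the project.
import numpy as np

rng = np.random.default_rng(0)
n, f, d = 20, 4, 10                    # nodes, Byzantine nodes, dimension
honest = rng.normal(0.0, 1.0, (n - f, d))
byzantine = np.full((f, d), 50.0)      # adversarial vectors, far off
vectors = np.vstack([honest, byzantine])

def trimmed_mean(x, f):
    s = np.sort(x, axis=0)             # drop f extremes per coordinate
    return s[f:-f].mean(axis=0)

true_mean = honest.mean(axis=0)
for name, est in [("plain mean", vectors.mean(axis=0)),
                  ("coordinate-wise median", np.median(vectors, axis=0)),
                  ("trimmed mean", trimmed_mean(vectors, f))]:
    print(f"{name}: error = {np.linalg.norm(est - true_mean):.3f}")
</code>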
  
  * **Accelerate Byzantine collaborative learning**: [[https://arxiv.org/abs/2008.00742|Our recent NeurIPS paper]] proposed algorithms for collaborative machine learning in the presence of Byzantine nodes, which have been proved to be near-optimal at convergence. However, these algorithms require all-to-all communication at every round, which is suboptimal. This research consists of designing a practical solution to Byzantine collaborative learning based on the idea of a random communication network at each round, with both theoretical guarantees and a practical implementation. Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
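
The following toy sketch conveys the general idea of a random communication network (it is not the paper's algorithm): each round, every node averages its model with a few randomly sampled peers instead of with all other nodes.

<code python>
# Toy sketch of random per-round communication: each node averages its
# model with k random peers, so each round costs O(n*k) messages instead
# of the O(n^2) of all-to-all. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)
n, d, k, rounds = 16, 5, 3, 10
models = rng.normal(size=(n, d))       # one parameter vector per node

for _ in range(rounds):
    new = models.copy()
    for i in range(n):
        peers = rng.choice([j for j in range(n) if j != i], k, replace=False)
        new[i] = np.mean(models[np.append(peers, i)], axis=0)
    models = new

print(np.std(models, axis=0))          # nodes drift toward agreement
</code>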
 + 
  * **Decentralize Tournesol’s learning algorithms**: The [[https://tournesol.app/|Tournesol platform]] leverages the contributions of its community to assign a « should be more recommended » score to the YouTube videos rated by the contributors, using a learning algorithm. Currently, the computations are performed on a central server. But as Tournesol’s user base grows, and as more sophisticated learning algorithms are considered for deployment, there is a growing need to decentralize the computations of the learning algorithm. This project aims to build a framework that will enable Tournesol users to run part of the computation of Tournesol’s scores directly in their browsers. Contact [[https://people.epfl.ch/le.hoang/?lang=en|Lê Nguyên Hoang]] for more information.
 + 
  * **Listening to the silent majority**: Vanilla machine learning from user-generated data inevitably favors those who generate the largest amounts of data, which means that learning algorithms will be optimized for these users rather than for the silent majority. This research aims to correct this bias by inferring what data the majority would likely have generated, and what the models would have learned if the silent majority’s data had been included in their training. It involves designing algorithms, proving their correctness and implementing them. This research is motivated by the [[https://tournesol.app/|Tournesol project]]. Contact [[https://people.epfl.ch/le.hoang/?lang=en|Lê Nguyên Hoang]] for more information.
 + 
  * **Should experts be given more voting rights?**: This is a question that Condorcet tackled in 1785, through what is now known as the jury problem. However, his model was crude and does not apply to many critical problems, e.g. determining whether a video on vaccines should be widely recommended. This research aims to better understand how voting rights should be allocated, based not only on how likely voters are to be correct, but also on the correlations between the voters’ judgments. So far, it involves mostly theoretical analysis. This research is motivated by the [[https://tournesol.app/|Tournesol project]]. Contact [[https://people.epfl.ch/le.hoang/?lang=en|Lê Nguyên Hoang]] for more information.
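
For background, the classical jury theorem with independent voters can be checked in a few lines; the project's point is precisely that correlations between voters break this independence assumption.

<code python>
# Worked example of Condorcet's classical jury theorem: the probability
# that a simple majority of n independent voters, each correct with
# probability p > 1/2, reaches the correct verdict grows with n.
from math import comb

def majority_correct(n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, majority_correct(n, 0.6))   # accuracy increases with jury size
</code>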
  
  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact [[matteo.monti@epfl.ch|Matteo Monti]] for more information.
\\
  
  * **Hijacking proof-of-work to make it useful: a distributed gradient-free learning approach**

Proof-of-work blockchains, notably Bitcoin and Ethereum, reach a probabilistic consensus about the contents of the blockchain through a mechanism of probabilistic leader election. Every contributor to the consensus tries to solve a puzzle, and the first one to succeed is elected leader, allowed to create the next block and publicly add information to it. The puzzle needs to be hard to solve and easy to verify, solvable only by random guessing with no shortcuts, and its difficulty must be tunable so that nodes do not find answers simultaneously, take leadership in parallel, and fork the chain in two. Partial cryptographic hash reversal has traditionally been a perfect candidate for such a puzzle, but it is of no interest outside of being a blockchain challenge. And with 100-300 PetaFLOP/s of general-purpose computational power (drawing 100 TWh/y) tied into the Ethereum blockchain alone as of early 2022, the waste of computational resources and energy is colossal.
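
For concreteness, here is a minimal sketch of such a puzzle, partial cryptographic hash reversal: solving requires brute-force guessing, verifying requires a single hash, and the difficulty is tuned by the number of required leading zero bits.

<code python>
# Minimal sketch of partial cryptographic hash reversal as a
# proof-of-work puzzle: hard to solve, easy to verify, tunable difficulty.
import hashlib
from itertools import count

def solve(block: bytes, difficulty_bits: int) -> int:
    target = 1 << (256 - difficulty_bits)
    for nonce in count():                       # random guessing, no shortcut
        h = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:   # required leading zero bits
            return nonce

def verify(block: bytes, nonce: int, difficulty_bits: int) -> bool:
    h = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))

nonce = solve(b"block payload", 16)                # ~2^16 guesses on average
print(nonce, verify(b"block payload", nonce, 16))  # True
</code>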

While the interest of blockchains and the suitability of proof-of-work as a mechanism to run them are widely debated, proof-of-work is to this day the mechanism behind the two largest blockchains. We try to make at least some of that work useful by injecting a "try" step of a (1,λ)-ES evolutionary search algorithm into the hash computation loop, slowing it down and making it do something useful during the slowdown. This class of evolutionary search algorithms achieves good performance on black-box optimization tasks (sometimes exceeding RL approaches on traditionally RL problems), is embarrassingly parallel, fits the requirements for a proof-of-work function well, and can be empirically optimized to minimize the waste of computational resources during a training run.
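
As an illustration of the building block (a generic sketch, not our proof-of-work construction), a plain (1,λ)-ES samples λ perturbed candidates per generation and keeps the best, using no gradients; the "try" step injected into the hash loop is one such candidate evaluation.

<code python>
# Generic (1,lambda)-ES sketch (not our proof-of-work construction):
# each generation samples lambda perturbed candidates of the current
# parent and keeps only the best one. No gradients are used.
import numpy as np

def one_comma_lambda_es(objective, x0, lam=8, sigma=0.3, generations=200):
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    for _ in range(generations):
        candidates = x + sigma * rng.normal(size=(lam, x.size))
        scores = [objective(c) for c in candidates]  # embarrassingly parallel
        x = candidates[int(np.argmin(scores))]       # "comma": parent discarded
    return x

sphere = lambda v: float(np.sum(v ** 2))             # toy black-box objective
print(one_comma_lambda_es(sphere, [3.0, -2.0]))      # approaches the optimum
</code>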

However, in its current state, the (1,λ)-ES-based useful proof-of-work has been proven to work in cases where the data used for the training tasks can be fully replicated among the nodes. For numerous applications, this is not an option. Finding ways to solve that problem, from both a theoretical and an experimental perspective, will be the goal of this project.
  
You will need solid skills in Python (Rust and WebAssembly are a plus) and a basic understanding of distributed algorithms and of machine learning concepts. Some familiarity with blockchains and black-box optimization is a plus, but not a requirement. Contact Andrei Kucharavy (andrei.kucharavy@epfl.ch) for more information.