  * **[[cryptocurrencies|Cryptocurrencies]]**: We have several project openings as part of our ongoing research on designing new cryptocurrency systems. Please contact [[rachid.guerraoui@epfl.ch|Prof. Rachid Guerraoui]].
  
  * **On the design and implementation of scalable and secure blockchain algorithms**: Consensus has recently gained in popularity with the advent of blockchain technologies. Unfortunately, most blockchains do not scale due, in part, to their centralized (leader-based) design. We recently designed a fully decentralized (leader-less) algorithm that promises to scale to large networks. The goal of this project is to implement it in Rust and compare its performance on AWS instances against a traditional leader-based alternative like BFT-Smart, whose code will be provided. Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.
  
  * **Making Blockchain Accountable**: One of the key drawbacks of blockchain is its lack of accountability: it does not hold participants responsible for their actions. This is easy to see, as a malicious or Byzantine user typically double spends in a branch of blocks that disappears from the system, hence remaining undetected. Accountability is often thought to be communication costly: to detect a malicious participant who has sent deceitful messages to different honest participants to make them disagree, one may be tempted to force each honest participant to exchange all the messages they receive and cross-check them (a toy illustration of this cross-checking appears below). However, we have recently designed an algorithm that shares the same communication complexity as the consensus algorithms of existing blockchains. The goal of this project is to make blockchains accountable by implementing this accountable consensus algorithm and comparing it on a distributed set of machines against a baseline implementation. Contact [[https://people.epfl.ch/vincent.gramoli|Vincent Gramoli]] for more information.

  * **Decentralized authentication for cryptocurrencies**: Current cryptocurrency systems use expensive cryptographic operations to authenticate users. These heavy computations limit the number of users and operations a system can serve concurrently, which prevents it from scaling. Our recent research shows that we can use a decentralized authentication algorithm to bypass the cryptographic bottleneck and make cryptocurrency systems faster and more available. This is a practical project which requires good knowledge of network programming, preferably in Rust (otherwise C++), and of the basics of cryptography (hash functions, asymmetric cryptography). Preferred skills include distributed algorithms and more advanced cryptography such as BLS signatures. Contact Pierre-Louis Roman <pierre-louis.roman@epfl.ch> for more information.
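
To fix the intuition behind the cross-checking idea, here is a minimal sketch in Python, assuming messages are signed and that two conflicting signed messages from the same sender in the same round form a proof of misbehaviour that any third party can verify. All names are illustrative; this naive log-exchange approach is precisely the communication-costly baseline that the project's algorithm avoids.

<code python>
# Toy equivocation detection: an honest participant records every signed
# message it sees (including messages forwarded by other honest nodes) and
# flags any sender that signed two different payloads for the same round.
from collections import defaultdict

class EquivocationDetector:
    def __init__(self):
        # sender -> round -> first signed payload observed
        self.seen = defaultdict(dict)

    def observe(self, sender, rnd, payload, signature):
        """Return a proof of misbehaviour (two signed, conflicting messages)
        if `sender` equivocated in round `rnd`, otherwise None.
        Signatures are assumed to have been verified on reception."""
        previous = self.seen[sender].get(rnd)
        if previous is None:
            self.seen[sender][rnd] = (payload, signature)
            return None
        prev_payload, prev_sig = previous
        if prev_payload != payload:
            # both messages carry the sender's signature, so together they
            # irrefutably incriminate the sender to any third party
            return (prev_payload, prev_sig, payload, signature)
        return None
</code>
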
  * **Topology-aware mempool for cryptocurrencies**: The mempool is a core component of cryptocurrency systems: it disseminates user transactions to the miner nodes before they reach consensus. Current mempools assume a homogeneous network topology where all machines have the same bandwidth and latency. This unrealistic assumption forces the system to progress at the speed of the slowest node in the system. This project aims at implementing a mempool which exploits the heterogeneity of the network to speed up data dissemination for cryptocurrency systems. This is a practical project which requires good knowledge of network programming (either Go or C++) and of distributed algorithms. Contact Gauthier Voron <gauthier.voron@epfl.ch> for more information.
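
As a purely hypothetical illustration of what exploiting heterogeneity could mean, the sketch below biases gossip fan-out towards peers with a low measured round-trip time instead of picking targets uniformly at random; the policy, names, and constants are invented for illustration and are not the project's design.

<code python>
# Toy topology-aware fan-out: rank peers by measured RTT and disseminate to
# the fastest ones first, keeping one random pick so that slow peers still
# receive transactions eventually.
import random

def pick_fanout(peer_rtts, fanout):
    """peer_rtts maps peer id -> measured round-trip time in ms."""
    ranked = sorted(peer_rtts, key=peer_rtts.get)   # fastest peers first
    fast = ranked[:max(fanout - 1, 1)]
    rest = [p for p in ranked if p not in fast]
    return fast + ([random.choice(rest)] if rest else [])

peers = {"a": 12.0, "b": 240.0, "c": 35.0, "d": 9.0, "e": 410.0}
print(pick_fanout(peers, fanout=3))   # e.g. ['d', 'a', 'b']
</code>
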
  * **Robust mean estimation**: In recent years, many algorithms have been proposed to perform robust mean estimation, which has been shown to be equivalent to robust gradient-based machine learning. A new concept has been proposed to define the performance of a robust mean estimator, called the [[https://arxiv.org/abs/2008.00742|averaging constant]] (along with the Byzantine resilience). This research project consists of computing the theoretical averaging constant of different proposed robust mean estimators, and of studying their empirical performance on randomly generated vectors. Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
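
In the spirit of the empirical part of the project, here is a small self-contained experiment comparing the plain mean with two standard robust estimators (coordinate-wise median and coordinate-wise trimmed mean) on randomly generated vectors corrupted by a few Byzantine ones; the attack and all constants are illustrative.

<code python>
import numpy as np

rng = np.random.default_rng(0)
n, f, d = 50, 10, 20                      # n vectors, f Byzantine, dimension d
honest = rng.normal(loc=1.0, scale=1.0, size=(n - f, d))
byzantine = np.full((f, d), 100.0)        # crude attack: large constant vectors
X = np.vstack([honest, byzantine])

def trimmed_mean(X, f):
    # drop the f largest and f smallest values in every coordinate
    Xs = np.sort(X, axis=0)
    return Xs[f:-f].mean(axis=0)

true_mean = np.ones(d)                    # expectation of the honest vectors
for name, est in [("mean", X.mean(axis=0)),
                  ("coord-wise median", np.median(X, axis=0)),
                  ("trimmed mean", trimmed_mean(X, f))]:
    print(f"{name:>18}: error = {np.linalg.norm(est - true_mean):.3f}")
</code>
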
  * **Accelerate Byzantine collaborative learning**: [[https://arxiv.org/abs/2008.00742|Our recent NeurIPS paper]] proposed algorithms for collaborative machine learning in the presence of Byzantine nodes, which have been proved to be near-optimal with respect to optimality at convergence. However, these algorithms require all-to-all communication at every round, which is suboptimal. This research consists of designing a practical solution to Byzantine collaborative learning, based on the idea of a random communication network at each round, with both theoretical guarantees and a practical implementation. Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
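
A toy sketch of the communication pattern in question (not the paper's algorithm): each node contacts only a few random peers per round instead of all n-1 others, and aggregates what it receives with a robust rule, here a coordinate-wise median.

<code python>
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 16, 8, 4                        # n nodes, model dim d, k peers/round

def round_step(models, k):
    """One round: every node pulls the models of k random peers (instead of
    all-to-all) and aggregates them with a coordinate-wise median."""
    new = np.empty_like(models)
    for i in range(len(models)):
        peers = rng.choice(len(models), size=k, replace=False)
        received = np.vstack([models[peers], models[i][None]])
        new[i] = np.median(received, axis=0)    # robust aggregation
    return new

models = rng.normal(size=(n, d))
for _ in range(10):
    models = round_step(models, k)
print("disagreement after 10 rounds:", models.std(axis=0).mean())
</code>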
  
  * **GAR performances on different datasets**: Robust machine learning on textual data and content recommendation is critical for the safety of social media users (harassment, hate speech, etc.), but also for the reliability of scientific uses of natural language processing, such as processing computer programs, chemistry, and drug discovery. Text datasets are known to have long-tailed distributions, which poses specific challenges for robustness, while content recommendation datasets may feature clusters of similar users. The goal of this project is to better understand the properties of different datasets, and what makes a gradient aggregation rule (e.g. Krum, trimmed mean...) better than another, given a specific text dataset (conversational chatbots, translation, GitHub code, etc.). Contact [[https://people.epfl.ch/le.hoang|Lê Nguyên Hoang]] for more information.
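
For concreteness, a straightforward NumPy implementation of Krum, one of the aggregation rules mentioned above; the rule itself is standard (score each gradient by its distance to its closest neighbours, keep the best-scored one), and how its behaviour depends on the dataset is exactly what the project would study.

<code python>
import numpy as np

def krum(gradients, f):
    """Krum over n gradients with at most f Byzantine workers: each gradient
    is scored by the sum of squared distances to its n - f - 2 nearest
    neighbours, and the lowest-scoring gradient is selected."""
    n = len(gradients)
    assert n - f - 2 > 0, "Krum needs n > f + 2"
    dists = np.linalg.norm(gradients[:, None] - gradients[None, :], axis=-1) ** 2
    scores = [np.sort(np.delete(dists[i], i))[:n - f - 2].sum() for i in range(n)]
    return gradients[int(np.argmin(scores))]

rng = np.random.default_rng(2)
grads = np.vstack([rng.normal(0, 1, (8, 5)),    # honest gradients
                   rng.normal(50, 1, (2, 5))])  # Byzantine outliers
print(krum(grads, f=2))                         # returns an honest gradient
</code>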
  
  * **Strategyproof collaborative filtering**: In collaborative filtering, other users' inputs are used to generalize the preferences of a given user. Such an approach has been critical to improving performance. However, it exposes each user to being manipulated by the inputs of malicious users, which is arguably currently occurring on social media. In this theoretical project, we search for Byzantine-resilient and strategyproof learning algorithms to perform something akin to collaborative filtering. This would also have important applications for implicit voting systems on exponential-size decision sets. Contact [[https://people.epfl.ch/le.hoang|Lê Nguyên Hoang]] for more information.
  
  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact [[matteo.monti@epfl.ch|Matteo Monti]] for more information.
\\
  
  * **GANs with Transformers**: Since their introduction in 2017, Transformer architectures have revolutionized machine learning models for NLP. Thanks to the scalability of self-attention-only architectures, models can now scale to trillions of parameters, allowing human-like capacities of text generation. However, they are not without their own shortcomings, notably due to their maximum-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning - Generative Adversarial Networks (GANs) - performs remarkably well when it comes to images, but has until recently struggled with text, due to its sequential and discrete nature, which is not compatible with the gradient back-propagation GANs need to train. Some of those issues have been solved, but a major one remains: their scalability, due to the usage of RNNs instead of pure self-attention architectures. Previously, we were able to show that it is impossible to trivially replace RNN layers with Transformer layers (https://arxiv.org/abs/2108.12275, presented at RANLP 2021). This project will build on those results and attempt to create stable Transformer-based text GANs based on the tricks known to stabilize Transformer training, or attempt to theoretically demonstrate the inherent instability of Transformer-derived architectures in the adversarial regime. You will need a solid background in linear algebra, acquaintance with the theory of machine learning, specifically neural networks, as well as experience with scientific computing in Python, ideally with PyTorch. Experience with NLP is desirable, but not required.

  * **Hijacking proof-of-work to make it useful: a distributed gradient-free learning approach**: Proof-of-work blockchains - notably Bitcoin and Ethereum - reach a probabilistic consensus about the contents of the blockchain through a mechanism of probabilistic leader election. Every contributor to the consensus tries to solve a puzzle, and the first one to succeed is elected the leader, allowed to create the next block and publicly add information to it. The puzzle needs to be hard to solve and easy to verify, solvable only by random guessing without any shortcuts, and its difficulty must be tunable so that nodes don't find answers simultaneously and take different leaderships, forking the chain in two. Partial cryptographic hash reversal has traditionally been a perfect candidate for such a puzzle, but it has no interest outside being a challenge for the blockchain. With 100-300 PetaFLOP/s (drawing 100 TWh/y) of general-purpose computational power tied into the Ethereum blockchain alone as of early 2022, the waste of computational resources and energy is colossal. While the interest of blockchains and the suitability of proof-of-work as a mechanism to run them are widely debated, it is to this day the mechanism behind the two largest ones. We try to make at least some of that computation useful by injecting a "try" step of a (1,λ)-ES evolutionary search algorithm into the hash computation loop, slowing it down and making it do something useful during the slowdown period (see the sketch below). This class of evolutionary search algorithms achieves good performance on black-box optimization tasks (sometimes exceeding RL approaches on traditionally RL problems), is embarrassingly parallel, fits the requirements for a proof-of-work function well, and can be empirically optimized to minimize the waste of computational resources during a training run. However, in its current state, the (1,λ)-ES-based useful proof-of-work has only been proven to work in cases where the data used for the training tasks can be fully replicated among the nodes; for numerous applications, this is not an option. Finding ways to solve that problem, both from a theoretical and an experimental perspective, will be the goal of this project. You will need solid skills in Python (Rust and WebAssembly are a plus), and a basic understanding of distributed algorithms and of machine learning concepts. Some familiarity with blockchains and black-box optimization is a plus, but is not a requirement. Contact [[https://people.epfl.ch/andrei.kucharavy|Andrei Kucharavy]] for more information.
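
To make the injected "try" step concrete, here is a toy fusion of one (1,λ)-ES generation with a hash puzzle; the objective function, constants, and payload encoding are placeholders, and a real scheme would need to bind the evolutionary-search work to the block contents far more carefully.

<code python>
import hashlib, random, struct

def fitness(theta):
    # placeholder black-box objective: maximise -||theta||^2
    return -sum(x * x for x in theta)

def useful_pow(header, theta, sigma=0.1, lam=8, target=2 ** 244):
    """Each proof-of-work attempt first performs one useful (1,lambda)-ES
    'try': sample lam children of theta, evaluate them, keep the best; the
    evaluated candidate then doubles as nonce material for the hash puzzle."""
    while True:
        children = [[x + random.gauss(0.0, sigma) for x in theta]
                    for _ in range(lam)]
        scores = [fitness(c) for c in children]
        best = max(range(lam), key=scores.__getitem__)
        theta = children[best]                       # (1,lambda) selection
        payload = header + struct.pack("d", scores[best]) + repr(theta).encode()
        digest = hashlib.sha256(payload).digest()
        if int.from_bytes(digest, "big") < target:   # puzzle solved
            return theta, digest                     # theta improved meanwhile

theta, digest = useful_pow(b"block-header", theta=[1.0] * 4)
print(digest.hex())
</code>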
  
\\