\\ \\
  
  * **Hijacking proof-of-work to make it useful: a distributed gradient-free learning approach**: Proof-of-work blockchains - notably Bitcoin and Ethereum - reach probabilistic consensus about the contents of the blockchain through a mechanism of probabilistic leader election. Every contributor to the consensus tries to solve a puzzle, and the first one to succeed is elected leader, allowed to create the next block and publicly add information to it. The puzzle needs to be hard to solve and easy to verify, solvable only by random guessing with no shortcuts, and its difficulty must be tunable so that nodes don't find answers simultaneously and claim competing leaderships, forking the chain in two. Partial cryptographic hash reversal has traditionally been a perfect candidate for such a puzzle, but it has no use beyond being a challenge for the blockchain. And with 100-300 PetaFLOP/s of general-purpose computational power (drawing 100 TWh/y) tied into the Ethereum blockchain alone as of early 2022, the waste of computational resources and energy is colossal. While the value of blockchains and the suitability of proof-of-work as a mechanism to run them are widely debated, it is to this day the mechanism behind the two largest ones. We try to make at least some of that work useful by injecting a "try" step of a (1,λ)-ES evolutionary search algorithm into the hash computation loop, slowing it down and making it do something useful during the slowdown period (a minimal sketch of this idea is given below). This class of evolutionary search algorithms achieves good performance on black-box optimization tasks (sometimes exceeding RL approaches on traditionally RL problems), is embarrassingly parallel, fits the requirements for a proof-of-work function well, and can be empirically optimized to minimize the waste of computational resources during a training run.
However, in its current state, the (1,λ)-ES-based useful proof-of-work has only been shown to work in cases where the data used for the training tasks can be fully replicated among the nodes. For numerous applications, that is not an option. Finding ways to solve this problem, from both a theoretical and an experimental perspective, will be the goal of this project.
You will need solid skills in Python (Rust and WebAssembly are a plus) and a basic understanding of distributed algorithms and machine learning concepts. Some familiarity with blockchains and black-box optimization is a plus, but not a requirement. Contact Andrei Kucharavy (andrei.kucharavy@epfl.ch) for more information.
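To make the mechanism concrete, below is a minimal Python sketch of one way a (1,λ)-ES "try" step could be interleaved with partial hash reversal. All names (''es_try_step'', ''fitness'', ''mine_block'', ''DIFFICULTY'') and the toy objective are illustrative assumptions, not the project's actual implementation.

<code python>
# Minimal sketch (illustrative only, not the project's implementation) of
# injecting a (1,lambda)-ES "try" step into a hash-based proof-of-work loop.
import hashlib
import numpy as np

DIFFICULTY = 2 ** 240   # hash values below this target win the block
SIGMA = 0.1             # mutation step size
LAMBDA = 8              # offspring sampled per ES generation

def fitness(params: np.ndarray) -> float:
    """Black-box objective to maximize; a real task would evaluate a model."""
    return -float(np.sum(params ** 2))  # toy objective: optimum at params = 0

def es_try_step(parent: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One (1,lambda)-ES generation: sample lambda mutants, keep the best."""
    offspring = parent + SIGMA * rng.standard_normal((LAMBDA, parent.size))
    scores = [fitness(child) for child in offspring]
    return offspring[int(np.argmax(scores))]

def mine_block(header: bytes, parent: np.ndarray, max_iters: int = 10 ** 6):
    """Interleave a useful ES step with each partial hash reversal attempt."""
    rng = np.random.default_rng()
    for nonce in range(max_iters):
        # Useful work: advance the evolutionary search by one generation.
        parent = es_try_step(parent, rng)
        # Proof-of-work: commit to the current ES state inside the hashed
        # payload, so checking the hash also commits to the useful work.
        payload = header + nonce.to_bytes(8, "big") + parent.tobytes()
        digest = hashlib.sha256(payload).digest()
        if int.from_bytes(digest, "big") < DIFFICULTY:
            return nonce, parent, digest
    return None

if __name__ == "__main__":
    print(mine_block(b"block-header", np.zeros(16)))
</code>

The point of committing the ES state inside the hashed payload is that every hash attempt is forced to pay for one generation of the search, which is exactly the deliberate slowdown described above.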