====== Education ======

\\

The lab currently teaches the following courses:
\\

  * [[education/ca_2018|Concurrent Algorithms]] (theory & practice)
  * [[education/da|Distributed Algorithms]] (theory & practice)
\\
In the past, the lab taught the following courses:

  * [[http://moodle.epfl.ch/course/view.php?id=14044|Information, Calcul et Communication]]
  * [[http://cowww.epfl.ch/proginfo/wwwhiver/|Introduction à la Programmation Orientée Objet]]
===== Master Projects =====
  
DCL offers master projects in the following areas:
  
  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact [[mailto:matteo.monti@epfl.ch|Matteo Monti]] to get more information.
  
  
  * **Distributed computing using RDMA and/or NVRAM**: contact [[https://people.epfl.ch/igor.zablotchi|Igor Zablotchi]] for more information.
  
  * **[[Distributed ML|Distributed Machine Learning]]**: contact [[http://people.epfl.ch/georgios.damaskinos|Georgios Damaskinos]] for more information.
  
  * **Robust Distributed Machine Learning**: With the proliferation of big datasets and models, Machine Learning is becoming distributed. Following the standard parameter server model, the learning phase is carried out by two categories of machines: parameter servers and workers. Any of these machines could behave arbitrarily (i.e., be Byzantine), affecting the model convergence in the learning phase. Our goal in this project is to build a system that is robust against Byzantine behavior of both parameter servers and workers (see the sketch below). Our first prototype, [[https://www.sysml.cc/doc/2019/54.pdf|AggregaThor]], describes the first scalable robust Machine Learning framework. It fixed a severe vulnerability in TensorFlow and showed how to make TensorFlow even faster while remaining robust. Contact [[https://people.epfl.ch/arsany.guirguis|Arsany Guirguis]] for more information.
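A minimal sketch of the robust-aggregation idea behind such a framework, assuming numpy. Coordinate-wise median is used here purely for illustration (AggregaThor itself builds on the Multi-Krum rule); all names and numbers below are hypothetical, not taken from the AggregaThor codebase.

<code python>
import numpy as np

def coordinate_wise_median(gradients):
    """Robust aggregation: take the median of each coordinate across workers.

    Unlike plain averaging, the median tolerates a minority of arbitrarily
    corrupted (Byzantine) gradients: outliers cannot drag the median past
    the honest values.
    """
    return np.median(np.stack(gradients), axis=0)

# Toy parameter-server step: 4 honest workers, 1 Byzantine worker.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])
honest = [true_grad + 0.1 * rng.standard_normal(3) for _ in range(4)]
byzantine = [np.array([1e6, -1e6, 1e6])]  # arbitrary adversarial vector

print("mean  :", np.mean(np.stack(honest + byzantine), axis=0))  # hijacked by the attacker
print("median:", coordinate_wise_median(honest + byzantine))     # stays close to true_grad
</code>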
  
  * **Stochastic gradient: (artificial) reduction of the ratio variance/norm for adversarial distributed SGD**: One computationally-efficient and non-intrusive line of defense for adversarial distributed SGD (e.g., one parameter server distributing the gradient estimation to several, possibly adversarial workers) relies on the honest workers sending back gradient estimations with sufficiently low variance; an assumption that is sometimes hard to satisfy in practice. One solution could be to (drastically) increase the batch size at the workers, but doing so may well defeat the very purpose of distributing the computation. \\ In this project, we propose two approaches that you can choose to explore (you may also propose a different one) to (artificially) reduce the ratio variance/norm of the stochastic gradients while keeping the benefits of the distribution. The first, speculative, approach boils down to "intelligent" coordinate selection. The second makes use of some kind of "momentum" at the workers (illustrated in the sketch below). \\ [1] [[https://papers.nips.cc/paper/6617-machine-learning-with-adversaries-byzantine-tolerant-gradient-descent|"Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent"]] \\ [2] [[https://arxiv.org/abs/1610.05492|"Federated Learning: Strategies for Improving Communication Efficiency"]] \\ Contact [[https://people.epfl.ch/sebastien.rouault|Sébastien Rouault]] for more information.
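A minimal sketch of the worker-side "momentum" idea, assuming numpy; the ''MomentumWorker'' class, the β value, and the toy numbers are hypothetical illustrations, not part of any existing codebase.

<code python>
import numpy as np

class MomentumWorker:
    """Hypothetical worker that sends an exponential moving average (EMA)
    of its stochastic gradients instead of each raw estimate.

    Averaging successive estimates shrinks the noise (variance) while
    preserving the signal (norm), i.e., it lowers the variance/norm ratio
    that defenses such as [1] rely on. The price is a bias when the true
    gradient drifts between steps.
    """

    def __init__(self, beta=0.9):
        self.beta = beta
        self.momentum = None

    def step(self, stochastic_grad):
        if self.momentum is None:
            self.momentum = np.zeros_like(stochastic_grad)
        self.momentum = self.beta * self.momentum + (1.0 - self.beta) * stochastic_grad
        return self.momentum  # what gets sent to the parameter server

# Toy demo: noisy estimates of a fixed true gradient.
rng = np.random.default_rng(1)
true_grad = np.ones(10)
worker = MomentumWorker(beta=0.9)
for _ in range(200):
    sent = worker.step(true_grad + 5.0 * rng.standard_normal(10))

raw = true_grad + 5.0 * rng.standard_normal(10)
print("raw  variance/norm :", (raw - true_grad).var() / np.linalg.norm(raw))
print("sent variance/norm :", (sent - true_grad).var() / np.linalg.norm(sent))
</code>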
  
  
  * **Consistency in global-scale storage systems**: We offer several projects in the context of storage systems, ranging from implementation of social applications (similar to [[http://retwis.redis.io/|Retwis]], or [[https://github.com/share/sharejs|ShareJS]]) to recommender systems, static content storage services (à la [[https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf|Facebook's Haystack]]), or experimenting with well-known cloud serving benchmarks (such as [[https://github.com/brianfrankcooper/YCSB|YCSB]]); please contact [[http://people.epfl.ch/dragos-adrian.seredinschi|Adrian Seredinschi]] or [[https://people.epfl.ch/karolos.antoniadis|Karolos Antoniadis]] for further information.
  
  * **[[cryptocurrencies|Cryptocurrencies]]**: We have several project openings as part of our ongoing research on designing new cryptocurrency systems. Please contact [[mailto:rachid.guerraoui@epfl.ch|Prof. Rachid Guerraoui]].
  
  
  
If the subject of a Master Project interests you as a Semester Project, please contact the supervisor of the Master Project to see if it can be considered for a Semester Project.
  
EPFL I&C duration, credits and workload information are available [[https://www.epfl.ch/schools/ic/education/|here]]. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.