Differences

This shows you the differences between two versions of the page.

education [2019/06/17 15:45]
zablotch
education [2025/05/05 15:53] (current)
fablpd
Line 8: Line 8:
 \\
  
-  * [[education/ca_2018|Concurrent Algorithms]] (theory & practice)
 +  * [[education/ca_2024|Concurrent Computing (CS-453)]] (theory & practice)
-  * [[education/da|Distributed Algorithms]] (theory & practice)
 +  * [[education/da_2023|Distributed Algorithms (CS-451)]] (theory & practice)
 \\
 The lab taught in the past the following courses:
Line 28: Line 28:
  * **[[cryptocurrencies|Cryptocurrencies]]**: We have several project openings as part of our ongoing research on designing new cryptocurrency systems. Please contact [[rachid.guerraoui@epfl.ch|Prof. Rachid Guerraoui]].
  
-  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact [[matteo.monti@epfl.ch|Matteo Monti]] to get more information.
  
 +  * **Scalable Distributed Cache Coherency**: The distributed cache coherency problem is a critical challenge in modern computing systems, especially as cloud platforms, large-scale web services, and in-memory data grids become increasingly central to industry. In distributed systems, ensuring that all nodes have a consistent view of shared data, even when copies are cached locally, is essential for correctness and reliability. Without coherent caches, stale reads can lead to data races, inconsistency, and subtle bugs: issues that can severely impact services like Amazon DynamoDB, Google Spanner, or Meta's TAO graph store, where distributed caching is used at massive scale. Companies like Google and Microsoft invest heavily in research and infrastructure (e.g., Tardis, Azure Service Fabric) to implement scalable, low-latency cache coherence protocols across thousands of machines. Efficient distributed cache coherence not only improves performance and availability but also enables stronger consistency guarantees in systems that underpin everything from real-time recommendations to financial transactions. As systems grow more distributed and memory-centric, solving this problem becomes foundational to scaling modern software infrastructures. However, current distributed cache coherency protocols suffer from excessive communication overhead, which limits scalability. The question is how to reduce this overhead while preserving the protocol's overall performance (see the write-invalidate sketch after this list). If interested, contact [[https://people.epfl.ch/beatrice.shokry|Beatrice Shokry]].
  
-  * **Distributed computing using RDMA and/or NVRAM.** RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. NVRAM is byte-addressable persistent (non-volatile) memory with access times on the same order of magnitude as traditional (volatile) RAM. These two recent technologies pose novel challenges and raise new opportunities in distributed system design and implementation. Contact [[https://people.epfl.ch/igor.zablotchi|Igor Zablotchi]] for more information.
  
-  * **[[Distributed ML|Distributed Machine Learning]]**: contact [[http://people.epfl.ch/georgios.damaskinos|Georgios Damaskinos]] for more information.
  
-  * **Robust Distributed Machine Learning**: With the proliferation of big datasets and models, Machine Learning is becoming distributed. Following the standard parameter server model, the learning phase is handled by two categories of machines: parameter servers and workers. Any of these machines could behave arbitrarily (i.e., be Byzantine), affecting the model convergence in the learning phase. Our goal in this project is to build a system that is robust against Byzantine behavior of both parameter servers and workers. Our first prototype, AggregaThor (https://www.sysml.cc/doc/2019/54.pdf), describes the first scalable robust Machine Learning framework. It fixed a severe vulnerability in TensorFlow and showed how to make TensorFlow even faster, while robust. Contact [[https://people.epfl.ch/arsany.guirguis|Arsany Guirguis]] for more information.
 +  * **Designing and Evaluating Efficient Concurrent Algorithms:** Multiprocessor computations are fundamental to modern computing, with concurrent data structures serving as their core building blocks. Despite the extensive study of concurrent algorithms, many key challenges remain unsolved. The project focuses on designing efficient concurrent data structures to address problems that have gained attention in recent years but still lack efficient solutions. The student will design solutions that ensure correctness (linearizability), implement them, and benchmark their performance against existing approaches (see the lock-free stack sketch after this list). This will provide hands-on experience in both the theory and practice of concurrent algorithms, letting the student explore the intricate balance between concurrency, performance, and correctness in real-world applications. For more information, contact [[https://people.epfl.ch/gal.sela|Gal Sela]].
  
-  * **Stochastic gradient: (artificial) reduction of the ratio variance/norm for adversarial distributed SGD**: One computationally-efficient and non-intrusive line of defense for adversarial distributed SGD (e.g., one parameter server distributing the gradient estimation to several, possibly adversarial, workers) relies on the honest workers to send back gradient estimations with sufficiently low variance; an assumption which is sometimes hard to satisfy in practice. One solution could be to (drastically) increase the batch size at the workers, but doing so may as well defeat the very purpose of distributing the computation. \\ In this project, we propose two approaches that you can choose to explore (you may also propose a different approach) to (artificially) reduce the ratio variance/norm of the stochastic gradients, while keeping the benefits of the distribution. The first proposed approach, speculative, boils down to "intelligent" coordinate selection. The second makes use of some kind of "momentum" at the workers. \\ [1] [[https://papers.nips.cc/paper/6617-machine-learning-with-adversaries-byzantine-tolerant-gradient-descent|"Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent"]] \\ [2] [[https://arxiv.org/abs/1610.05492|"Federated Learning: Strategies for Improving Communication Efficiency"]] \\ Contact [[https://people.epfl.ch/sebastien.rouault|Sébastien Rouault]] for more information.
 +  * **Accelerating Safe ML Systems:** ML has long been a hot topic, and with LLMs it is becoming even more attractive to both the research community and industry (e.g., Google, Meta). In particular, training large models on massive data makes distributed computing (i.e., distributing tasks among machines) indispensable, which leads to two main challenges. First, how to do it fast? Second, how to do it safely (e.g., secure collaborative training, robust ML)? At the heart of both challenges lies the question of how to communicate with other machines in a fast and secure way. This leads us to Remote Direct Memory Access (RDMA), a technology that is becoming increasingly important in machine learning, particularly for distributed training of large models and handling massive datasets. RDMA enables high-throughput, low-latency data transfers between servers without involving the CPU, which significantly reduces the overhead associated with traditional networking methods. This is crucial for ML tasks that require rapid synchronization and communication among multiple nodes. The question is then how to use RDMA efficiently to build fast and secure ML systems. If interested, contact [[https://people.epfl.ch/Beatrice.Shokry?lang=en|Beatrice Shokry]] for more information.
  
 +  * **Tackling data heterogeneity in Byzantine-robust ML**: Context: Distributed ML is a very effective paradigm for learning collaboratively when all users correctly follow the protocol. However, some users may behave adversarially, and measures should be taken to protect against such Byzantine behavior [ [[https://papers.nips.cc/paper/2017/hash/f4b9ec30ad9f68f89b29639786cb62ef-Abstract.html|1]], [[https://proceedings.mlr.press/v162/farhadkhani22a.html|2]] ]. In real-world settings, users have different datasets (i.e., non-iid data), which makes defending against Byzantine behavior challenging, as was shown recently in [ [[https://proceedings.neurips.cc/paper/2021/hash/d2cd33e9c0236a8c2d8bd3fa91ad3acf-Abstract.html|3]], [[https://openreview.net/forum?id=jXKKDEi5vJt|4]] ]. Some defenses have been proposed to tackle data heterogeneity, but their performance is suboptimal on simple learning tasks. Goal: Develop defenses with special emphasis on empirical performance and efficiency in the heterogeneous setting (see the trimmed-mean sketch after this list). Contact [[https://people.epfl.ch/youssef.allouah?lang=en|Youssef Allouah]] for more information.
  
-  * **Consistency in global-scale storage systems**: We offer several projects in the context of storage systems, ranging from implementation of social applications (similar to [[http://retwis.redis.io/|Retwis]] or [[https://github.com/share/sharejs|ShareJS]]) to recommender systems, static content storage services (à la [[https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf|Facebook's Haystack]]), or experimenting with well-known cloud serving benchmarks (such as [[https://github.com/brianfrankcooper/YCSB|YCSB]]); please contact [[http://people.epfl.ch/dragos-adrian.seredinschi|Adi Seredinschi]] or [[https://people.epfl.ch/karolos.antoniadis|Karolos Antoniadis]] for further information.
 +  * **Benchmark to certify Byzantine-robustness in ML**: Context: Multiple attacks have been proposed to instantiate a Byzantine adversary in distributed ML [ [[https://proceedings.neurips.cc/paper/2019/hash/ec1c59141046cd1866bbbcdfb6ae31d4-Abstract.html|1]], [[https://proceedings.mlr.press/v115/xie20a.html|2]] ]. While these attacks have been successful against known defenses, it remains unknown whether stronger attacks exist. As such, a strong benchmark is needed to go beyond the cat-and-mouse game that characterizes the existing research. Ideally, similar to other ML subfields such as privacy-preserving ML or adversarial examples, the desired benchmark should guarantee that no stronger attack exists. Goal: Develop a strong benchmark for attacks in Byzantine ML (see the attack-benchmark sketch after this list). Contact [[https://people.epfl.ch/youssef.allouah?lang=en|Youssef Allouah]] for more information.
 + 
 + 
 + 
 +  * **Evaluating Distributed Systems**: By nature, distributed systems are hard to evaluate. Deploying real-world systems and orchestrating large-scale experiments requires dedicated software and expensive infrastructure. As a result, many widespread distributed systems are not properly evaluated, or are tested on incomparable or irreproducible setups. Projects in this category aim to build efficient and scalable evaluation tools for distributed systems. [[https://dl.acm.org/doi/10.1145/3552326.3567482|Diablo]]-related projects involve building a test harness for evaluating blockchains (skills required: network programming, blockchain, Go, C++). Another set of projects focuses on creating **large network simulators** able to emulate hundreds of powerful machines from a single physical server (skills required: system programming, virtualization, C, C++; see the latency-injection sketch after this list). Contact [[https://people.epfl.ch/gauthier.voron/?lang=en|Gauthier Voron]] for more information.
 + 
 +  * **Smart Contracts and Decentralized Software**: Smart contracts are one of the key innovations brought by blockchains, enabling users to deploy code that is executed transparently, autonomously and in a decentralized fashion. However, the applicability of smart contracts is hampered by their limited performance. Projects in this category aim to build runtime environments for fast and efficient execution of smart contracts. A first set of projects addresses the challenge of **deterministic parallelism**: how to use several threads to execute smart contracts while guaranteeing a deterministic result (skills required: compiler principles, Rust; see the deterministic-parallelism sketch after this list). A second set of projects explores the concept of non-transactional smart contracts, a way to remove the notion of gas from smart contracts (skills required: system programming, C, Rust). A last set of projects focuses on high-throughput cryptographic primitives: how to use hardware acceleration to speed up transaction authentication (skills required: cryptography principles, GPU programming, C, Assembly). Contact [[https://people.epfl.ch/gauthier.voron/?lang=en|Gauthier Voron]] for more information.
 + 
 +  * **Safe and Scalable Consensus**: Decentralized systems like cryptocurrencies rely on the concept of consensus. This component is critical, as it dictates how performant, safe and scalable a distributed system is. Over the last years, the DCL has pushed the performance of consensus algorithms to [[https://arxiv.org/pdf/2304.07081|unprecedented levels]], but practical safety and scalability are yet to be addressed. Projects in this category focus on designing and implementing distributed consensus algorithms that are safer against cyberattacks or adverse environments and work with a higher number of participants. On one side, some projects explore new **consensus designs** with good theoretical guarantees and practical behavior (skills required: distributed algorithms, network programming, Go). On the other side, some projects focus on ensuring the correctness of existing consensus algorithms through **model checking** at various levels (skills required: distributed algorithms, Rust, TLA+; see the toy model-checking sketch after this list). Contact [[https://people.epfl.ch/gauthier.voron/?lang=en|Gauthier Voron]] for more information.
 + 
 + 
 +  * **Robust mean estimation**: In recent years, many algorithms have been proposed to perform robust mean estimation, which has been shown to be equivalent to robust gradient-based machine learning. A new concept, the [[https://arxiv.org/abs/2008.00742|averaging constant]], has been proposed to characterize the performance of a robust mean estimator (along with its Byzantine resilience). This research project consists of computing the theoretical averaging constant of different proposed robust mean estimators and of studying their empirical performance on randomly generated vectors (see the estimator-comparison sketch after this list). Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
 + 
 + 
 +  * **Accelerate Byzantine collaborative learning**: [[https://arxiv.org/abs/2008.00742|Our recent NeurIPS paper]] proposed algorithms for collaborative machine learning in the presence of Byzantine nodes, which have been proven near-optimal at convergence. However, these algorithms require all-to-all communication at every round, which is suboptimal. This research consists of designing a practical solution to Byzantine collaborative learning based on the idea of a random communication network at each round, with both theoretical guarantees and a practical implementation (see the gossip-averaging sketch after this list). Contact [[https://people.epfl.ch/sadegh.farhadkhani?lang=en|Sadegh Farhadkhani]] for more information.
 + 
 + 
 + 
 +  * **Probabilistic Byzantine Resilience**: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results (see the broadcast-simulation sketch after this list). Please contact [[matteo.monti@epfl.ch|Matteo Monti]] for more information.
 + 
 +  * **Microsecond-scale dependable systems.** Modern networking technologies such as RDMA (Remote Direct Memory Access) allow for sub-microsecond communication latency. Combined with emerging data center architectures, such as disaggregated resource pools, they open the door to novel blazing-fast and resource-efficient systems. Our research focuses on designing such microsecond-scale systems that can also tolerate faults. Our vision is that tolerating network asynchrony as well as faults (crash and/or Byzantine) is a must, but that it shouldn't affect the overall performance of a system. We achieve this goal by devising and implementing novel algorithms tailored for new hardware and by revisiting theoretical models to better reflect modern data centers. Previous work encompasses microsecond-scale (BFT) State Machine Replication, Group Membership Services and Key-Value Stores (OSDI'20, ATC'22 and ASPLOS'23). Overall, if you are interested in making data centers faster and safer, contact [[https://people.epfl.ch/athanasios.xygkis|Athanasios Xygkis]] and [[https://people.epfl.ch/antoine.murat|Antoine Murat]] for more information.
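The short Go sketches below illustrate some of the projects above; every protocol, name, and parameter in them is a simplified assumption made for illustration, not project code.

For the cache-coherency project, a minimal sketch of write-invalidate coherence: the home node invalidates every remote copy before a write completes, so readers never observe stale data, and the message counter makes visible the communication overhead that the project aims to reduce.

<code go>
package main

import "fmt"

// Illustrative sketch only: a single-threaded model of write-invalidate
// coherence. The home node holds the authoritative copy; caches hold
// local copies that are invalidated (one message each) on every write.
type home struct {
	value  int
	copies map[int]*cache // caches currently holding a copy
	msgs   int            // invalidation messages sent (the overhead)
}

type cache struct {
	id    int
	val   int
	valid bool
}

func (h *home) read(c *cache) int {
	if !c.valid { // miss: fetch from home and register the copy
		c.val, c.valid = h.value, true
		h.copies[c.id] = c
	}
	return c.val
}

func (h *home) write(v int) {
	for _, c := range h.copies { // invalidate all remote copies first
		c.valid = false
		h.msgs++
	}
	h.copies = map[int]*cache{}
	h.value = v
}

func main() {
	h := &home{value: 1, copies: map[int]*cache{}}
	a, b := &cache{id: 1}, &cache{id: 2}
	fmt.Println(h.read(a), h.read(b)) // 1 1
	h.write(2)                        // invalidates both copies
	fmt.Println(h.read(a), h.read(b), "invalidations:", h.msgs) // 2 2 invalidations: 2
}
</code>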
 + 
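For the concurrent-algorithms project, a minimal sketch of a classic linearizable data structure, a Treiber-style lock-free stack: each operation takes effect atomically at a successful compare-and-swap on the top pointer. (Go's garbage collector sidesteps the ABA problem that manual memory reclamation would raise.)

<code go>
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type node struct {
	val  int
	next *node
}

// Treiber stack: every push/pop linearizes at a successful CAS on top.
type stack struct{ top atomic.Pointer[node] }

func (s *stack) push(v int) {
	n := &node{val: v}
	for {
		n.next = s.top.Load()
		if s.top.CompareAndSwap(n.next, n) {
			return // linearization point
		} // another thread won the race: retry
	}
}

func (s *stack) pop() (int, bool) {
	for {
		t := s.top.Load()
		if t == nil {
			return 0, false // empty
		}
		if s.top.CompareAndSwap(t, t.next) {
			return t.val, true // linearization point
		}
	}
}

func main() {
	var s stack
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // 4 concurrent pushers
		wg.Add(1)
		go func(base int) {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				s.push(base*1000 + j)
			}
		}(i)
	}
	wg.Wait()
	count := 0
	for _, ok := s.pop(); ok; _, ok = s.pop() {
		count++
	}
	fmt.Println("popped", count, "elements") // popped 4000 elements
}
</code>

Benchmarking such a structure against a lock-based counterpart under varying thread counts is exactly the kind of evaluation the project involves.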
 + 
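For the data-heterogeneity project, a minimal sketch of one standard robust aggregation rule, the coordinate-wise trimmed mean; the gradients and the trimming parameter are illustrative. Under non-iid data, honest gradients themselves spread out, which is precisely what makes such trimming-based defenses lose accuracy.

<code go>
package main

import (
	"fmt"
	"sort"
)

// trimmedMean aggregates worker gradients coordinate by coordinate,
// dropping the f largest and f smallest values before averaging, so
// up to f Byzantine workers cannot drag the aggregate arbitrarily far.
func trimmedMean(grads [][]float64, f int) []float64 {
	d := len(grads[0])
	out := make([]float64, d)
	col := make([]float64, len(grads))
	for j := 0; j < d; j++ {
		for i, g := range grads {
			col[i] = g[j]
		}
		sort.Float64s(col)
		sum := 0.0
		for _, v := range col[f : len(col)-f] {
			sum += v
		}
		out[j] = sum / float64(len(col)-2*f)
	}
	return out
}

func main() {
	grads := [][]float64{
		{1.0, 2.0}, {1.2, 1.8}, {0.9, 2.1}, // honest, similar gradients
		{100, -100},                        // Byzantine outlier
	}
	fmt.Println(trimmedMean(grads, 1)) // close to the honest average
}
</code>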
 + 
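For the benchmarking project, a toy version of the cat-and-mouse loop: a handful of attacks pitted against a handful of aggregation rules, reporting each rule's worst-case deviation from the honest mean. A real benchmark would search over a far richer attack space and aim for guarantees rather than examples; attacks, rules and the metric here are illustrative.

<code go>
package main

import (
	"fmt"
	"math"
	"sort"
)

type rule func(vals []float64) float64
type attack func(honest []float64, f int) []float64 // honest + f Byzantine values

func mean(v []float64) float64 {
	s := 0.0
	for _, x := range v {
		s += x
	}
	return s / float64(len(v))
}

func median(v []float64) float64 {
	c := append([]float64(nil), v...)
	sort.Float64s(c)
	return c[len(c)/2]
}

// signFlip sends the negated, amplified honest mean.
func signFlip(honest []float64, f int) []float64 {
	out := append([]float64(nil), honest...)
	for i := 0; i < f; i++ {
		out = append(out, -10*mean(honest))
	}
	return out
}

// bigValue sends a huge constant.
func bigValue(honest []float64, f int) []float64 {
	out := append([]float64(nil), honest...)
	for i := 0; i < f; i++ {
		out = append(out, 1e6)
	}
	return out
}

func main() {
	honest := []float64{0.9, 1.0, 1.1, 1.2}
	rules := map[string]rule{"mean": mean, "median": median}
	attacks := map[string]attack{"signFlip": signFlip, "bigValue": bigValue}
	for rn, r := range rules { // worst case over the attack suite
		worst := 0.0
		for _, a := range attacks {
			dev := math.Abs(r(a(honest, 1)) - mean(honest))
			if dev > worst {
				worst = dev
			}
		}
		fmt.Printf("%s: worst-case deviation %.3f\n", rn, worst)
	}
}
</code>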
  
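For the evaluation-tools project, the core mechanism of many network simulators in miniature: interpose on message delivery and delay each message according to a configured per-link latency. Real simulators interpose at the OS or virtualization layer rather than on in-process channels as this sketch does.

<code go>
package main

import (
	"fmt"
	"sync"
	"time"
)

// simNet delivers messages between in-process "nodes", injecting a
// configurable one-way latency per link, as a real WAN would.
type simNet struct {
	latency map[[2]int]time.Duration // [from,to] -> delay
	inbox   []chan string
}

func newSimNet(n int) *simNet {
	s := &simNet{latency: map[[2]int]time.Duration{}, inbox: make([]chan string, n)}
	for i := range s.inbox {
		s.inbox[i] = make(chan string, 64)
	}
	return s
}

func (s *simNet) send(from, to int, msg string) {
	delay := s.latency[[2]int{from, to}]
	go func() { // deliver asynchronously after the simulated delay
		time.Sleep(delay)
		s.inbox[to] <- msg
	}()
}

func main() {
	net := newSimNet(2)
	net.latency[[2]int{0, 1}] = 50 * time.Millisecond // emulate a WAN link

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // node 1 measures the observed delivery time
		defer wg.Done()
		start := time.Now()
		msg := <-net.inbox[1]
		fmt.Printf("node 1 got %q after %v\n", msg, time.Since(start).Round(time.Millisecond))
	}()
	net.send(0, 1, "ping")
	wg.Wait()
}
</code>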
  
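For the smart-contract projects, a sketch of deterministic parallelism via optimistic execution: transactions run speculatively in parallel against a snapshot, record their read/write sets, and commit in a fixed order, re-executing whenever a conflict with an earlier commit is detected. The outcome always equals a sequential run in index order, whatever the thread scheduling. (The actual projects target Rust; Go is used here only to keep all sketches in one language.)

<code go>
package main

import (
	"fmt"
	"sync"
)

// A transaction reads and writes a key-value state through callbacks,
// so the runtime can record read/write sets for conflict detection.
type txn func(read func(string) int, write func(string, int))

// runDeterministic: speculative parallel phase, then ordered commits.
func runDeterministic(state map[string]int, txns []txn) {
	type result struct {
		reads  map[string]bool
		writes map[string]int
	}
	results := make([]result, len(txns))

	var wg sync.WaitGroup
	for i, t := range txns {
		wg.Add(1)
		go func(i int, t txn) { // speculative execution against the snapshot
			defer wg.Done()
			r := result{reads: map[string]bool{}, writes: map[string]int{}}
			t(func(k string) int { r.reads[k] = true; return state[k] },
				func(k string, v int) { r.writes[k] = v })
			results[i] = r
		}(i, t)
	}
	wg.Wait()

	dirty := map[string]bool{} // keys written by committed transactions
	for i, t := range txns {   // commit phase: fixed, deterministic order
		conflict := false
		for k := range results[i].reads {
			if dirty[k] {
				conflict = true
			}
		}
		if conflict { // stale reads: re-execute against the current state
			results[i].writes = map[string]int{}
			t(func(k string) int { return state[k] },
				func(k string, v int) { results[i].writes[k] = v })
		}
		for k, v := range results[i].writes {
			state[k] = v
			dirty[k] = true
		}
	}
}

func main() {
	state := map[string]int{"a": 1, "b": 1}
	incr := func(k string) txn {
		return func(read func(string) int, write func(string, int)) {
			write(k, read(k)+1)
		}
	}
	runDeterministic(state, []txn{incr("a"), incr("a"), incr("b")})
	fmt.Println(state["a"], state["b"]) // always 3 2, regardless of scheduling
}
</code>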
 \\
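For the model-checking direction, the essence of the approach on a toy protocol: fork the execution at every scheduling choice, enumerate all terminal states, and check the agreement invariant in each. The real projects apply this to actual consensus algorithms with tools such as TLA+; the protocol below is illustrative only.

<code go>
package main

import "fmt"

// Toy protocol: two nodes broadcast their proposals; a node decides
// the max value it has seen once the other node's message arrives.
// The checker forks at every message-delivery choice and verifies
// agreement in every terminal state.
type state struct {
	inflight [2]bool // inflight[i]: node i's broadcast not yet delivered
	prop     [2]int  // fixed proposals
	val      [2]int  // max value seen so far by each node
	decided  [2]int  // -1 until the node decides
}

var executions, violations int

func explore(s state) {
	progressed := false
	for from := 0; from < 2; from++ {
		if !s.inflight[from] {
			continue
		}
		progressed = true
		n := s // state is a value type: copying forks the execution
		n.inflight[from] = false
		to := 1 - from
		if s.prop[from] > n.val[to] {
			n.val[to] = s.prop[from]
		}
		n.decided[to] = n.val[to] // heard from the other node: decide
		explore(n)
	}
	if !progressed { // terminal state: check the agreement invariant
		executions++
		if s.decided[0] != s.decided[1] {
			violations++
		}
	}
}

func main() {
	explore(state{
		inflight: [2]bool{true, true},
		prop:     [2]int{3, 7},
		val:      [2]int{3, 7},
		decided:  [2]int{-1, -1},
	})
	fmt.Printf("explored %d executions, agreement violations: %d\n",
		executions, violations)
}
</code>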
  
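For the robust mean estimation project, the empirical half in its simplest form: draw honest vectors around a known mean, inject Byzantine outliers, and measure how far each estimator's output drifts; the theoretical half of the project would instead bound this drift via the averaging constant. Estimators and parameters below are illustrative.

<code go>
package main

import (
	"fmt"
	"math"
	"math/rand"
	"sort"
)

// estimate reports the average distance of an estimator's output from
// the true mean (0 here), over random trials with n-f honest vectors
// and f Byzantine vectors placed far away.
func estimate(agg func([][]float64) []float64, n, f, d, trials int) float64 {
	total := 0.0
	for t := 0; t < trials; t++ {
		vecs := make([][]float64, 0, n)
		for i := 0; i < n-f; i++ { // honest: true mean 0, unit noise
			v := make([]float64, d)
			for j := range v {
				v[j] = rand.NormFloat64()
			}
			vecs = append(vecs, v)
		}
		for i := 0; i < f; i++ { // Byzantine: a distant constant vector
			v := make([]float64, d)
			for j := range v {
				v[j] = 100
			}
			vecs = append(vecs, v)
		}
		dist := 0.0
		for _, x := range agg(vecs) {
			dist += x * x
		}
		total += math.Sqrt(dist)
	}
	return total / float64(trials)
}

func mean(vecs [][]float64) []float64 {
	out := make([]float64, len(vecs[0]))
	for _, v := range vecs {
		for j, x := range v {
			out[j] += x / float64(len(vecs))
		}
	}
	return out
}

func median(vecs [][]float64) []float64 { // coordinate-wise median
	out := make([]float64, len(vecs[0]))
	col := make([]float64, len(vecs))
	for j := range out {
		for i, v := range vecs {
			col[i] = v[j]
		}
		sort.Float64s(col)
		out[j] = col[len(col)/2]
	}
	return out
}

func main() {
	fmt.Printf("mean   drift: %.2f\n", estimate(mean, 10, 2, 5, 100))
	fmt.Printf("median drift: %.2f\n", estimate(median, 10, 2, 5, 100))
}
</code>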
 +
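For the collaborative-learning project, the random-communication idea in miniature: instead of all-to-all averaging, each node averages with a few random peers per round, so values still contract toward a common point while each node sends O(k) rather than O(n) messages. A Byzantine-robust version would replace the plain average with a robust aggregation rule; everything here is illustrative.

<code go>
package main

import (
	"fmt"
	"math/rand"
)

// One gossip round: every node averages its value with k random peers.
// Compared to all-to-all averaging, each node contacts k peers instead
// of n-1, at the price of slower (but still fast) convergence.
func gossipRound(vals []float64, k int) []float64 {
	n := len(vals)
	next := make([]float64, n)
	for i := range vals {
		sum := vals[i]
		for j := 0; j < k; j++ {
			sum += vals[rand.Intn(n)] // random peer (repeats possible; fine for a sketch)
		}
		next[i] = sum / float64(k+1)
	}
	return next
}

func spread(vals []float64) float64 { // max - min, a simple disagreement measure
	lo, hi := vals[0], vals[0]
	for _, v := range vals {
		if v < lo {
			lo = v
		}
		if v > hi {
			hi = v
		}
	}
	return hi - lo
}

func main() {
	vals := make([]float64, 100)
	for i := range vals {
		vals[i] = rand.Float64() * 10 // initial local models (scalars here)
	}
	for r := 0; r <= 20; r++ {
		if r%5 == 0 {
			fmt.Printf("round %2d: spread %.4f\n", r, spread(vals))
		}
		vals = gossipRound(vals, 3)
	}
}
</code>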
 +
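For the probabilistic-resilience projects, the practical option in its simplest form: a Monte Carlo estimate of the probability that a push-gossip broadcast reaches every correct node despite silent Byzantine nodes, as a function of the fan-out. The protocol below is generic push gossip, not the lab's actual broadcast algorithm.

<code go>
package main

import (
	"fmt"
	"math/rand"
)

// One simulated run of push gossip with f Byzantine nodes (ids 0..f-1)
// that receive but never forward. The source is a correct node; every
// correct node that receives the message forwards it once to `fanout`
// uniformly random nodes. Returns true if all correct nodes got it.
func run(n, f, fanout int) bool {
	informed := make([]bool, n)
	informed[f] = true // source: the first correct node
	queue := []int{f}
	for len(queue) > 0 {
		queue = queue[1:] // pop one informed correct node; it forwards once
		for i := 0; i < fanout; i++ {
			t := rand.Intn(n)
			if !informed[t] {
				informed[t] = true
				if t >= f { // Byzantine nodes stay silent
					queue = append(queue, t)
				}
			}
		}
	}
	for _, ok := range informed[f:] {
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	const n, f, trials = 100, 10, 2000
	for fanout := 2; fanout <= 8; fanout += 2 {
		hits := 0
		for t := 0; t < trials; t++ {
			if run(n, f, fanout) {
				hits++
			}
		}
		// empirical estimate of the total-delivery probability
		fmt.Printf("fanout %d: total delivery in %.1f%% of runs\n",
			fanout, 100*float64(hits)/trials)
	}
}
</code>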
 +\\
  
 ===== Semester Projects =====
Line 50: Line 75:
 If the subject of a Master Project interests you as a Semester Project, please contact the supervisor of the Master Project to see if it can be considered for a Semester Project.
  
-EPFL I&C duration, credits and workload information are available [[https://www.epfl.ch/schools/ic/education/|here]]. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.
 +EPFL I&C duration, credits and workload information are available on [[https://www.epfl.ch/schools/ic/education/master/semester-project-msc/|https://www.epfl.ch/schools/ic/education/master/semester-project-msc/]].
 +