Distributed Machine Learning
Modern machine learning algorithms operate over huge volumes of data, highlighting the demand for distributed solutions from both the system and the algorithmic perspective.
Asynchronous ML on Android devices
This project concerns training ML algorithms asynchronously on Android devices. The main challenges are mobile churn, latency, energy consumption, memory, bandwidth, and accuracy. The project comprises multiple semester projects that tackle subsets of these challenges from the algorithmic perspective (SGD variants) and the system perspective (a framework for Android).
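A minimal sketch of what the asynchronous side could look like, assuming a Hogwild-style scheme in which several threads (standing in for devices) apply SGD updates to a shared model without locking; the toy regression task and all names are illustrative, not the project's actual design:

```python
import threading
import random

# Toy task: fit w in y = 3*x. Each thread plays the role of a device
# applying SGD updates to the shared parameter without synchronization
# (Hogwild-style); races are assumed benign for this tiny model.
w = [0.0]                                    # shared parameter
data = [(x / 100, 3 * x / 100) for x in range(1, 101)]

def device(steps, lr=0.05):
    rng = random.Random()
    for _ in range(steps):
        x, y = rng.choice(data)
        grad = 2 * (w[0] * x - y) * x        # d/dw of (w*x - y)^2
        w[0] -= lr * grad                    # unsynchronized update

threads = [threading.Thread(target=device, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(round(w[0], 2))  # converges to 3.0
```

On a phone the "threads" would be devices communicating over the network, which adds exactly the churn, latency, and bandwidth constraints listed above.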
Personalized/Private ML in P2P network
This project calls for private ML algorithms in which data never leaves the user device and each user holds a personalized version of the model. More precisely, every mobile device maintains its own personalized learning model, trained locally on local data and updated periodically through cross-device communication, without sending its data to others. The communicated gradients must themselves be kept private, which introduces a trade-off between accuracy and privacy. The major challenges are this accuracy-privacy trade-off, memory, bandwidth, and latency.
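One round of such a scheme might look like the following hedged sketch: each peer trains on its own data and shares only a clipped, noise-perturbed gradient (a differential-privacy-style mechanism). The peer names, the toy regression task, and all constants are illustrative assumptions:

```python
import random

random.seed(0)  # reproducible toy run

def local_gradient(w, data):
    # Mean gradient of (w*x - y)^2 over this peer's private samples.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def privatize(g, clip=1.0, sigma=0.1):
    # Clip, then add Gaussian noise: sigma tunes the privacy/accuracy trade-off.
    g = max(-clip, min(clip, g))
    return g + random.gauss(0.0, sigma)

# Each peer keeps a personalized model w; raw data never leaves the device.
peers = {
    "alice": {"w": 0.0, "data": [(x / 10, 2.0 * x / 10) for x in range(1, 11)]},
    "bob":   {"w": 0.0, "data": [(x / 10, 2.5 * x / 10) for x in range(1, 11)]},
}

lr, mix = 0.3, 0.3   # mix < 1: each peer favours its own (exact) gradient
for _ in range(300):
    shared = {name: privatize(local_gradient(p["w"], p["data"]))
              for name, p in peers.items()}           # what leaves the device
    for name, p in peers.items():
        own = local_gradient(p["w"], p["data"])
        nbr = sum(g for n, g in shared.items() if n != name) / (len(peers) - 1)
        p["w"] -= lr * ((1 - mix) * own + mix * nbr)  # personalized update
```

Raising `sigma` strengthens privacy but injects more noise into every peer's model, which is precisely the accuracy-privacy trade-off the project targets.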
P2P data market
The goal is to design a P2P infrastructure that enables service providers (peers) to buy and sell data. The main challenge for a candidate scheme is defining and measuring data utility from the perspective of each peer. The revenue model and the privacy guarantees are two further important challenges in this setting.
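As a strawman for "data utility", one might measure a buyer's marginal validation gain from adding a seller's data. The 1-nearest-neighbour model and the tiny datasets below are illustrative assumptions, not a proposed design:

```python
# Candidate utility measure (one of many possible): the improvement in
# the buyer's validation accuracy when the seller's data is added.

def accuracy(train, val):
    # 1-nearest-neighbour on 1-D points: predict the label of the closest sample.
    correct = 0
    for x, y in val:
        pred = min(train, key=lambda s: abs(s[0] - x))[1]
        correct += pred == y
    return correct / len(val)

buyer_train = [(0.1, 0), (0.2, 0)]
buyer_val = [(0.15, 0), (0.9, 1), (0.8, 1)]   # buyer lacks class-1 examples
seller_data = [(0.85, 1)]

utility = accuracy(buyer_train + seller_data, buyer_val) \
        - accuracy(buyer_train, buyer_val)
print(utility)  # positive: the seller's data covers the buyer's blind spot
```

Note the asymmetry: the same seller data would have near-zero utility for a buyer who already covers class 1, which is why utility must be measured per peer.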
Federated optimization: distributed SGD with fault tolerance
This project explores the setting where data does not leave each user device and certain (arbitrary) devices fail and recover. The challenge is to accelerate learning under this scenario by leveraging techniques such as importance sampling.
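A hedged sketch of the importance-sampling idea under churn: devices with higher current loss are sampled more often, each gradient is re-weighted by 1/(m·p) to keep the update unbiased over the currently reachable devices, and unreachable devices are simply excluded from the round. The one-sample-per-device toy and all constants are assumptions:

```python
import random

random.seed(1)
data = [(x / 10, 3 * x / 10) for x in range(1, 21)]  # one sample per "device"
n, w, lr = len(data), 0.0, 0.1

for step in range(3000):
    # Churn: each device is reachable with probability 0.8 this round.
    alive = [i for i in range(n) if random.random() < 0.8]
    if not alive:
        continue
    # Importance sampling: favour devices with high current loss...
    losses = [(w * data[i][0] - data[i][1]) ** 2 + 1e-3 for i in alive]
    total = sum(losses)
    probs = [l / total for l in losses]
    j = random.choices(range(len(alive)), weights=probs)[0]
    x, y = data[alive[j]]
    grad = 2 * (w * x - y) * x
    # ...and re-weight by 1/(m * p) so the update stays unbiased
    # with respect to the devices that are alive in this round.
    w -= lr * grad / (len(alive) * probs[j])
```

Recomputing every loss per step is of course unrealistic; in the actual setting the sampling distribution would be maintained from stale/partial loss information, which is part of the research question.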
Byzantine-tolerant machine learning
Each node in the distributed setting can exhibit arbitrary (Byzantine) behaviour during the learning procedure. This project explores algorithms (SGD variants) in both the synchronous and the asynchronous setup. The student will implement these algorithms in our code base on top of TensorFlow.
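To illustrate the flavour of such algorithms, here is a minimal sketch of coordinate-wise median aggregation, one known defense against Byzantine workers in synchronous SGD (the worker gradients below are made up; this is not the project's code base):

```python
import statistics

def median_aggregate(grads):
    # grads: list of gradient vectors, one per worker.
    # The coordinate-wise median ignores extreme values in each coordinate,
    # unlike averaging, which a single Byzantine worker can drag arbitrarily.
    dim = len(grads[0])
    return [statistics.median(g[d] for g in grads) for d in range(dim)]

honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9]]
byzantine = [[1e6, -1e6]]                    # arbitrary adversarial update

agg = median_aggregate(honest + byzantine)
print(agg)  # [1.05, -1.05]: stays close to the honest gradients
```

Averaging the same four vectors would instead yield coordinates near ±250000, completely derailing the SGD step; this gap is what Byzantine-tolerant aggregation rules are designed to close.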
Black-Box attacks against recommender systems
A recommender system can be viewed as a black box that users query with feedback (e.g., ratings, clicks) to obtain an output list of recommendations. The goal is to infer properties of the recommendation algorithm by observing the outputs of different queries.
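A hedged illustration of the probing idea: query the black box with two users whose feedback histories are disjoint; heavily overlapping outputs suggest a non-personalized (e.g., popularity-based) algorithm. The toy recommender below is an assumption standing in for the real black box:

```python
# The "secret" black box: unbeknownst to the attacker, it just ranks
# unrated items by global popularity (no personalization at all).
POPULARITY = {"a": 90, "b": 80, "c": 70, "d": 60, "e": 50}

def recommend(rated_items, k=2):
    candidates = [i for i in POPULARITY if i not in rated_items]
    return sorted(candidates, key=POPULARITY.get, reverse=True)[:k]

# Attack: two probe users with disjoint feedback histories.
out1 = recommend({"d"})
out2 = recommend({"e"})
overlap = len(set(out1) & set(out2)) / 2
print(out1, out2, overlap)  # ['a', 'b'] ['a', 'b'] 1.0 -> likely popularity-based
```

A personalized algorithm would typically diverge on such probes, so the overlap statistic over many query pairs becomes a signal about the algorithm class.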
Multi-output multi-class classification
The goal of this project is to design a distributed ML algorithm suitable for multi-output classification (e.g., music tag prediction on mobile devices). Deep learning-based approaches seem promising for this task; nevertheless, current methods target only single-output classification.
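To make the single- versus multi-output distinction concrete, here is a hedged, deliberately non-deep-learning toy: two output heads with three classes each, trained with a per-output multiclass perceptron. All data, names, and constants are illustrative:

```python
import random

random.seed(0)

# Multi-output multi-class: O = 2 outputs, C = 3 classes per output.
# W[o][c] is a linear scorer (2 features + bias) for output o, class c;
# prediction is an independent argmax per output head.
O, C, D = 2, 3, 2
W = [[[0.0] * (D + 1) for _ in range(C)] for _ in range(O)]

def score(w, x):
    return w[0] * x[0] + w[1] * x[1] + w[2]

def predict(x):
    return [max(range(C), key=lambda c: score(W[o][c], x)) for o in range(O)]

def label(x):
    # Ground truth: output o depends only on coordinate o of x.
    return [min(int(x[o] * 3), 2) for o in range(O)]

levels = [0.1, 0.2, 0.5, 0.6, 0.9, 1.0]      # well-separated class bands
train = [(random.choice(levels), random.choice(levels)) for _ in range(500)]

for _ in range(20):                           # multiclass perceptron per output
    for x in train:
        y, p = label(x), predict(x)
        for o in range(O):
            if p[o] != y[o]:                  # update only the wrong head
                for w, s in ((W[o][y[o]], 1.0), (W[o][p[o]], -1.0)):
                    w[0] += s * x[0]; w[1] += s * x[1]; w[2] += s

acc = sum(predict(x) == label(x) for x in train) / len(train)
```

A single-output method would collapse the label tuple into one class (here, 9 joint classes), which scales poorly as outputs multiply; keeping independent heads over a shared representation is the structure the project would generalize to the distributed setting.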
Contact: Georgios Damaskinos