Distributed Machine Learning

Overview

Modern machine learning algorithms operate over huge volumes of data, highlighting the demand for distributed solutions from both a systems and an algorithmic perspective.

Asynchronous ML on Android devices

This project concerns the asynchronous training of ML algorithms on Android devices. The primary challenges are mobile churn, latency, energy consumption, memory, bandwidth, and accuracy.
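
As a toy illustration of the asynchronous setting, the sketch below simulates devices as threads that read possibly stale parameters from a shared server and push gradient updates back. The linear-regression task, shard layout, and learning rate are arbitrary placeholders; churn and energy constraints would enter as devices joining and leaving the pool.

<code python>
import threading
import numpy as np

# Synthetic linear-regression task, split across four simulated "devices".
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=1000)
shards = np.array_split(np.arange(1000), 4)

w = np.zeros(10)           # shared model (the "parameter server" state)
lock = threading.Lock()    # serializes updates; reads may still be stale

def device(dev_id, shard, steps=300, lr=0.01, batch=32):
    local_rng = np.random.default_rng(dev_id)   # per-device randomness
    for _ in range(steps):
        idx = local_rng.choice(shard, size=batch)
        w_stale = w.copy()                      # lock-free read: may be stale
        grad = X[idx].T @ (X[idx] @ w_stale - y[idx]) / batch
        with lock:
            w[:] -= lr * grad                   # apply the (possibly stale) gradient

threads = [threading.Thread(target=device, args=(i, s)) for i, s in enumerate(shards)]
for t in threads: t.start()
for t in threads: t.join()
print("parameter error:", np.linalg.norm(w - w_true))
</code>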

Related papers:
[1] Distributed Asynchronous Online Learning for Natural Language Processing
[2] Heterogeneity-aware Distributed Parameter Servers
[3] ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning

Multi-output multi-class classification

The goal of this project is to design a distributed ML algorithm suitable for multi-output classification (e.g., music tag prediction on mobile devices). Deep learning-based approaches seem promising for this task; however, current methods target only single-output classification.
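
A baseline that makes the multi-output structure concrete is to reduce the task to one binary classifier per tag. The sketch below does this with scikit-learn on synthetic "track features", purely as an illustrative assumption; the project itself targets deep, jointly trained models.

<code python>
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Synthetic data: 500 "tracks" with 20 features and 5 binary tags each.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
W = rng.normal(size=(20, 5))
Y = (X @ W + rng.normal(size=(500, 5)) > 0).astype(int)

# One logistic-regression classifier per output tag.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:3]))   # a 0/1 tag vector per track, not a single class
</code>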

Related papers:
[1] Deep Neural Networks for YouTube Recommendations
[2] Deep content-based music recommendation
[3] Codebook-based scalable music tagging with poisson matrix factorization

Personalized/Private ML in P2P network

This project calls for private ML algorithms in which data never leaves the user's device and each user maintains a personalized version of the model. More precisely, every mobile device holds its own personalized model, trained locally on local data and updated periodically through cross-device communication, without sending its data to others. Since the communicated gradients must themselves be kept private, there is an inherent trade-off between accuracy and privacy. The major challenges are the accuracy-privacy trade-off, memory, bandwidth, and latency.
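
A minimal sketch of one such scheme, assuming a simple gossip protocol: each peer takes a clipped, noised gradient step on its private data, then averages model parameters (never data) with a random neighbor. The clipping norm and noise scale below are illustrative and do not constitute a calibrated differential-privacy guarantee.

<code python>
import numpy as np

rng = np.random.default_rng(0)
d, n_peers = 10, 5
w_true = rng.normal(size=d)

# Each peer holds a private local dataset and its own personalized model.
peers = []
for _ in range(n_peers):
    X = rng.normal(size=(100, d))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    peers.append({"X": X, "y": y, "w": np.zeros(d)})

def private_step(p, lr=0.05, clip=1.0, sigma=0.1):
    grad = p["X"].T @ (p["X"] @ p["w"] - p["y"]) / len(p["y"])
    grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))   # clip gradient
    grad += sigma * rng.normal(size=d)                        # add noise
    p["w"] = p["w"] - lr * grad                               # data never leaves the peer

for _ in range(300):
    for p in peers:
        private_step(p)
    i, j = rng.choice(n_peers, size=2, replace=False)         # random gossip pair
    avg = 0.5 * (peers[i]["w"] + peers[j]["w"])
    peers[i]["w"], peers[j]["w"] = avg, avg.copy()            # exchange models, not data

print("mean error:", np.mean([np.linalg.norm(p["w"] - w_true) for p in peers]))
</code>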

Related papers:
[1] Decentralized Collaborative Learning of Personalized Models over Networks
[2] Privacy-Preserving Deep Learning
[3] Deep Learning with Differential Privacy

Federated optimization: distributed SGD with fault tolerance

This project explores the case where data never leaves each user's device while certain (arbitrary) devices fail and recover. The challenge is to accelerate learning in this scenario by leveraging techniques such as importance sampling.
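
As one concrete instance of the importance-sampling idea from the papers listed below, the sketch draws training examples with probability proportional to a cheap bound on their gradient norm (here, the feature norm) and reweights each gradient by 1/(n p_i) to keep the update unbiased. Device failures themselves are not modeled in this sketch.

<code python>
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d)) * rng.uniform(0.1, 3.0, size=(n, 1))  # uneven example scales
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Sampling probabilities proportional to a bound on the per-example gradient norm.
p = np.linalg.norm(X, axis=1)
p /= p.sum()

w = np.zeros(d)
for t in range(3000):
    i = rng.choice(n, p=p)
    g = (X[i] @ w - y[i]) * X[i]        # per-example gradient (least squares)
    g /= n * p[i]                       # importance weight keeps E[g] = full gradient
    w -= 0.05 / np.sqrt(t + 1) * g
print("parameter error:", np.linalg.norm(w - w_true))
</code>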

Related papers:
[1] Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling
[2] Stochastic Optimization with Importance Sampling

P2P data market

The goal is to design a P2P infrastructure that enables service providers (peers) to buy and sell data. The main challenge for a candidate scheme is defining and measuring the utility of data from the perspective of each peer. The revenue model and the privacy guarantees are two further important challenges in this setting.
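
One candidate utility measure, sketched below under strong simplifying assumptions, values a seller's dataset by the marginal validation-accuracy gain it gives the buyer's model; the project would need to weigh such a definition against the revenue model and the privacy guarantees.

<code python>
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make(n):  # synthetic binary-classification data shared by all peers
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_buy, y_buy = make(100)     # buyer's small private dataset
X_val, y_val = make(500)     # buyer's held-out validation set
X_sell, y_sell = make(300)   # dataset offered for sale by another peer

base = LogisticRegression().fit(X_buy, y_buy).score(X_val, y_val)
joint = LogisticRegression().fit(np.vstack([X_buy, X_sell]),
                                 np.concatenate([y_buy, y_sell])).score(X_val, y_val)
print(f"utility of the offered data (accuracy gain): {joint - base:+.3f}")
</code>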

Related papers:
[1] The Cost of Privacy: Destruction of Data-Mining Utility in Anonymized Data Publishing
[2] Price-Optimal Querying with Data APIs
[3] Query-Based Data Pricing

Black-Box Attacks against Recommender Systems

A recommender system can be viewed as a black box that users query with feedback (e.g., ratings, clicks) in order to obtain a list of recommendations. The goal is to infer properties of the recommendation algorithm by observing the outputs of different queries.
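
To make the query interface concrete, the sketch below wires up a toy similarity-based recommender as the black box and probes it with single-item feedback vectors, recovering which items the system treats as related. The internal model is purely an assumption for illustration.

<code python>
import numpy as np

rng = np.random.default_rng(0)
n_items = 8
item_vecs = rng.normal(size=(n_items, 4))      # hidden item embeddings

def recommend(ratings, k=3):
    """The black box: top-k items scored by similarity to the rated ones."""
    scores = item_vecs @ (item_vecs.T @ ratings)
    scores[ratings > 0] = -np.inf              # never re-recommend rated items
    return np.argsort(scores)[::-1][:k]

# Attack: one query per item with a single positive rating, logging the output.
inferred = {i: recommend((np.arange(n_items) == i).astype(float)).tolist()
            for i in range(n_items)}
print(inferred)                                # items the box treats as similar to each i
</code>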

Related papers:
[1] Stealing Machine Learning Models via Prediction APIs
[2] Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Contact: Georgios Damaskinos