Decentralised Learning Under Information Constraints
Authors: Ganesh, Jaggi, Loh
Differential privacy problems: computation is performed on distributed data with the goal of deriving useful statistics without leaking private information.
This work package focuses on problems of statistical estimation, hypothesis testing and decision-making in distributed systems, subject to additional constraints such as limited communication and privacy requirements.
We consider a large population of heterogeneous agents, each of which receives independent observations about a “true state of nature” that evolves gradually over time but undergoes occasional abrupt changes. Agents take actions and receive rewards that are a function of the state of nature, the agent’s intrinsic characteristics, and the weighted actions of other agents.
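For concreteness, one possible way to formalise this setting is sketched below; the symbols (the state θ_t, characteristics c_i, interaction weights w_ij and reward map f) are illustrative notation of our own rather than definitions fixed by the work package.

```latex
% Illustrative notation only; the symbols are assumptions, not fixed by the work package.
% At time t the state of nature \theta_t drifts slowly, with occasional abrupt jumps.
% Agent i observes X_{i,t} \sim P_{\theta_t}, plays action a_{i,t}, and receives the reward
\[
  r_{i,t} \;=\; f\Big(\theta_t,\; c_i,\; \sum_{j} w_{ij}\, a_{j,t}\Big),
\]
% where c_i denotes agent i's intrinsic characteristics and the weights w_{ij} capture
% the influence of other agents' actions.
```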
The objective is to learn the true state of nature, quickly detect abrupt changes, or maximise either individual or collective rewards. Agents may cooperate to achieve these goals, subject to constraints on communication and privacy (messages must not reveal too much about agent characteristics or data specific to them).
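Quick detection of an abrupt change in the state of nature is a classical sequential problem. The sketch below is a minimal single-agent illustration using a CUSUM statistic for a mean shift in Gaussian observations; the pre- and post-change means, the noise level and the threshold are assumptions made purely for illustration, not parameters of the proposed work.

```python
import numpy as np

def cusum_detect(observations, mu0, mu1, sigma, threshold):
    """One-sided CUSUM for a mean shift from mu0 to mu1 in Gaussian data.

    Returns the first index at which the statistic crosses `threshold`,
    or None if no change is declared.
    """
    stat = 0.0
    for t, x in enumerate(observations):
        # Log-likelihood ratio of the post-change model against the pre-change model.
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        stat = max(0.0, stat + llr)
        if stat >= threshold:
            return t
    return None

rng = np.random.default_rng(1)
# The state changes abruptly at time 200: the observation mean shifts from 0 to 1.
xs = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
print(cusum_detect(xs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0))
```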
We will establish upper and lower bounds on the sample complexity of the aforementioned tasks and develop computationally efficient algorithms for them. For each problem, we will be interested in bounds that are instance-optimal, as opposed to minimax-optimal, leading to a finer-grained understanding of the fundamental limits of distributed learning.
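To indicate the distinction (in our own notation, with the local-minimax benchmark used as just one common way of making instance optimality precise): a minimax bound controls the worst case over the whole model class, whereas an instance-optimal guarantee compares the risk at each instance with the best achievable in a neighbourhood of that instance.

```latex
% Our notation; the local-minimax benchmark is one common formalisation of instance optimality.
% Minimax: a single worst-case guarantee over the model class \mathcal{P}:
\[
  \inf_{\hat\theta}\; \sup_{P \in \mathcal{P}}\; \mathbb{E}_P\big[\ell(\hat\theta, \theta(P))\big].
\]
% Instance-optimal: for every P, the risk is within a constant of the best achievable
% against a small neighbourhood \mathcal{N}(P) of that instance:
\[
  \mathbb{E}_P\big[\ell(\hat\theta, \theta(P))\big]
  \;\le\; C \cdot \inf_{\tilde\theta}\; \sup_{P' \in \mathcal{N}(P)}\; \mathbb{E}_{P'}\big[\ell(\tilde\theta, \theta(P'))\big]
  \qquad \text{for all } P \in \mathcal{P}.
\]
```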
We will also study the impact of network topology on the speed of learning, and robust learning in the presence of adversarial agents. Applications include Internet-of-Things (IoT) networks, edge computing, and federated machine learning from healthcare data.
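The role of topology can be made concrete with gossip-style averaging, in which the speed of convergence to consensus is governed by the spectral gap of the mixing matrix, so denser topologies mix faster. The sketch below is illustrative only; the ring network and uniform weights are assumptions chosen for the example.

```python
import numpy as np

def gossip_average(values, W, num_rounds):
    """Synchronous gossip: each agent repeatedly averages with its neighbours.

    `W` is a doubly stochastic mixing matrix respecting the network topology;
    the values contract towards the global average at a rate set by the
    spectral gap of W.
    """
    x = np.asarray(values, dtype=float).copy()
    for _ in range(num_rounds):
        x = W @ x
    return x

# Toy example: 4 agents on a ring, each mixing uniformly with its two neighbours.
W_ring = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
print(gossip_average([1.0, 3.0, 5.0, 7.0], W_ring, num_rounds=50))  # approx. [4, 4, 4, 4]
```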
Security and privacy are major challenges, which we aim to address by leveraging concepts from information theory and differential privacy.
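As a point of reference for the privacy constraint, the Laplace mechanism is one standard way to release a statistic with differential privacy. The sketch below is a minimal illustration, not the method to be developed in this work package; the function name, the assumption that each record lies in a bounded interval, and the parameter values are ours.

```python
import numpy as np

def private_mean(data, epsilon, lower=0.0, upper=1.0, rng=None):
    """Release the mean of `data` with epsilon-differential privacy (Laplace mechanism).

    Each record is assumed to lie in [lower, upper], so changing one record
    moves the mean by at most (upper - lower) / n, which calibrates the noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    data = np.clip(np.asarray(data, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(data)
    return data.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: an agent perturbs its local statistic before sharing it.
local_data = np.random.default_rng(0).uniform(size=100)
print(private_mean(local_data, epsilon=0.5))
```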