Work Packages

We will monitor and evaluate work done on these WPs to inform Phase 2 of our Hub research. If you are interested in contributing to any of these work packages, please get in touch with us.

Trustworthy Cooperation in Heterogeneous Teams (Prorok, Lawry and Demiris)

Driven by a confluence of AI, communications, and mobile computing … the next wave of computing is about agents that communicate and co-ordinate.

Multi-agent problems in which each agent has only a partial view of the environment are notoriously difficult, as agents must maintain stochastic beliefs about the knowledge and actions of other agents. Yet communication can enable highly effective and scalable decentralised policies by overcoming partial observability and enabling better cooperation. Recent work has shown the potential benefits of behavioural heterogeneity in multi-agent systems, including resilience to observation noise. Motivated by this, we aim to provide new theoretical foundations, algorithms and optimization frameworks that facilitate controlled behavioural heterogeneity in agent teams. We will jointly tackle the multi-agent learning and communication problems over noisy, resource-limited communication links, leading to a new class of problems.
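
To make the setting concrete, here is a minimal sketch in Python of decentralised decision-making with communication under partial observability: each agent encodes its partial observation into a short message, the messages cross a noisy bandwidth-limited channel, and each agent combines what it hears with its own view to choose an action. All dimensions, noise levels and the linear encoders below are illustrative assumptions, not the WP's actual algorithms.

```python
# A minimal sketch (not the WP's actual method) of decentralised decision-making
# with communication under partial observability. All names and dimensions are
# illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, OBS_DIM, MSG_DIM = 4, 8, 2
NOISE_STD = 0.1  # channel noise on each message component

# Illustrative per-agent parameters (in practice these would be learned).
W_msg = rng.normal(size=(N_AGENTS, MSG_DIM, OBS_DIM))   # observation -> message
W_act = rng.normal(size=(N_AGENTS, OBS_DIM + MSG_DIM))  # features -> scalar action

def step(observations):
    """One decentralised decision step for all agents."""
    # 1. Each agent encodes its partial observation into a short message.
    messages = np.einsum('amo,ao->am', W_msg, observations)
    # 2. Messages traverse a noisy, resource-limited channel.
    received = messages + rng.normal(scale=NOISE_STD, size=messages.shape)
    # 3. Each agent averages the messages it hears from its teammates.
    mean_msg = (received.sum(axis=0, keepdims=True) - received) / (N_AGENTS - 1)
    # 4. The action depends on the agent's own observation plus the aggregate;
    #    per-agent parameters allow controlled behavioural heterogeneity.
    features = np.concatenate([observations, mean_msg], axis=1)
    return np.einsum('af,af->a', W_act, features)

actions = step(rng.normal(size=(N_AGENTS, OBS_DIM)))
print(actions)
```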

Learning to Compress and Communicate (Gündüz, Kontoyiannis, Johnson and Jaggi)

Computation, communication and coordination lie at the core of intelligence, and all three rely on the transmission and transformation of information.

Information theory has played a prominent role in identifying the fundamental limits of compression and communication, as well as guiding the design of practical algorithms and codes that achieve these limits. In parallel, there has been significant recent progress in data-driven approaches to compression and communication problems. In this WP, we will use information-theoretic principles to design efficient architectures and training strategies for learning under communication constraints. First, exploiting the natural sparsity present in most common information sources, we will establish precise achievability and converse coding theorems for lossless and lossy compression. Then, benefiting from recent advances in generative models (GANs, VAEs, diffusion models), we will design learning algorithms for joint source-channel compression and coding. Equipped with these tools, we will tackle problems such as distributed compression and multi-user communication with feedback, for which structured model-driven approaches have failed. Finally, machine learning algorithms are highly sensitive to data manipulation by malicious actors and to the leakage of sensitive private information, problems that are exacerbated in multi-agent systems. We will address these privacy and security challenges by significantly extending current robustness frameworks based on error-correcting codes.
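
As a toy illustration of the data-driven approach, the sketch below (assuming PyTorch; the architecture, noise level and source model are our own illustrative choices, not the WP's design) trains a small autoencoder end-to-end through an additive-Gaussian-noise channel, a minimal form of learned joint source-channel coding.

```python
# A minimal sketch, assuming PyTorch, of learned joint source-channel coding:
# an autoencoder whose bottleneck is transmitted through an AWGN channel,
# trained end-to-end so encoding and decoding are optimised jointly.

import torch
import torch.nn as nn

SRC_DIM, CH_USES, NOISE_STD = 16, 4, 0.1  # source dim, channel uses, noise std

encoder = nn.Sequential(nn.Linear(SRC_DIM, 32), nn.ReLU(), nn.Linear(32, CH_USES))
decoder = nn.Sequential(nn.Linear(CH_USES, 32), nn.ReLU(), nn.Linear(32, SRC_DIM))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

def awgn(x, std):
    """Additive white Gaussian noise channel with an average power constraint."""
    x = x / x.pow(2).mean().sqrt().clamp_min(1e-8)  # normalise transmit power
    return x + std * torch.randn_like(x)

for step in range(2000):
    src = torch.randn(64, SRC_DIM)            # toy Gaussian source
    rec = decoder(awgn(encoder(src), NOISE_STD))
    loss = nn.functional.mse_loss(rec, src)   # end-to-end distortion
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final MSE: {loss.item():.4f}")
```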

AI Theory for Quantum Information and Computing (Walton, Vasantam and Datta)

Quantum networking – where entangled qubits are transmitted with an eye towards enabling quantum cryptography, computing and sensing.

Quantum machine learning (QML) is a rapidly evolving discipline that combines quantum computing and machine learning, applying data-driven strategies to the quantum realm. QML models have the potential to offer speedups and better predictive accuracy compared to their classical counterparts. Yet the sample complexity even of elementary problems, such as classification and hypothesis testing, has not been studied for QML. Preliminary work has shown that when the same data are embedded into a classical probability distribution, or into probability amplitudes via the Grover-Rudolph embedding, quantum classification performs better. Among the mathematical quantities that arise in the study of quantum state discrimination are certain families of quantum divergences, which are of fundamental importance in quantum information theory. We will use tools from quantum information theory to study the sample complexity of quantum binary classification and to evaluate any advantage gained over the classical case. We will subsequently extend this to study the sample complexity of quantum versions of standard ML problems. Scaling up QML to distributed systems requires the development of quantum communication networks; to this end, we will collaborate with industry partners actively working on quantum AI algorithms. Links between information theory and AI, specifically reinforcement learning, can be used to improve resource allocation in quantum networks, with applications to fusion-based quantum computing. Quantum communication requires entangled qubits, whose creation is noisy and unreliable; this raises a variety of challenges that call for information-theoretic and Markov decision process frameworks to design optimal algorithms. In this WP, we will investigate the role of information and reinforcement learning in the scheduling and optimization of quantum computing systems.
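
For a concrete sense of the quantities behind quantum binary classification, the sketch below computes the Helstrom bound: the minimum error probability for discriminating two quantum states, given by half of one minus the trace norm of the weighted difference of the states. The single-qubit example states are illustrative assumptions.

```python
# A minimal sketch of quantum state discrimination: the Helstrom bound gives
# the minimum error probability for distinguishing states rho0 and rho1
# occurring with priors p0 and p1 = 1 - p0.

import numpy as np

def helstrom_error(rho0, rho1, p0=0.5):
    """Minimum discrimination error: 0.5 * (1 - || p0*rho0 - p1*rho1 ||_1)."""
    p1 = 1.0 - p0
    gamma = p0 * rho0 - p1 * rho1
    eigvals = np.linalg.eigvalsh(gamma)   # gamma is Hermitian, so real spectrum
    trace_norm = np.abs(eigvals).sum()    # ||.||_1 = sum of |eigenvalues|
    return 0.5 * (1.0 - trace_norm)

# Two illustrative single-qubit states: |0><0| and a slightly rotated pure state.
theta = 0.3
psi = np.array([np.cos(theta), np.sin(theta)])
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho1 = np.outer(psi, psi)

print(f"Helstrom error: {helstrom_error(rho0, rho1):.4f}")
```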

Decentralised Learning Under Information Constraints (Ganesh and Loh)

Differential privacy problems – computation is performed on distributed data with the goal of deriving useful statistics, without leaking private information.

This work package focuses on problems of statistical estimation, hypothesis testing and decision-making in distributed systems, subject to additional constraints such as communication and privacy. A large population of heterogeneous agents each receive independent observations about a “true state of nature,” which evolves gradually over time but with occasional abrupt changes. Agents take actions and receive rewards that are a function of the state of nature, the agent’s intrinsic characteristics, and the weighted actions of other agents. The objective is to learn the true state of nature, quickly detect abrupt changes, or maximise either individual or collective rewards. Agents may cooperate to achieve these goals, subject to constraints on communication and privacy (messages must not reveal too much about agent characteristics or data specific to them). We will establish upper and lower bounds on the sample complexity of these tasks and develop computationally efficient algorithms for them. For each problem, we will seek bounds that are instance optimal, rather than minimax optimal, leading to a finer-grained understanding of the fundamental limits of distributed learning. We will also study the impact of network topology on the speed of learning, and robust learning in the presence of adversarial agents. Applications include Internet-of-Things (IoT) networks, edge computing, and federated machine learning from healthcare data. Security and privacy are major challenges, which we aim to address by leveraging concepts from information theory and differential geometry.
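
As a minimal illustration of estimation under a local privacy constraint, the sketch below has each agent perturb its own observation with Laplace noise before reporting, so the aggregator can estimate the underlying state of nature without ever seeing raw data. The mechanism shown is standard epsilon-local differential privacy; the population size, epsilon and the true parameter are illustrative assumptions, not values from the WP.

```python
# A minimal sketch of distributed estimation under local differential privacy:
# agents privatise their observations with Laplace noise before sending them,
# and the aggregator averages the noisy reports. Parameters are illustrative.

import numpy as np

rng = np.random.default_rng(1)

N_AGENTS, EPSILON = 10_000, 1.0
TRUE_STATE = 0.3                      # hidden parameter in [0, 1]

# Each agent observes a noisy bit correlated with the true state.
obs = rng.random(N_AGENTS) < TRUE_STATE

# Local DP: values in [0, 1] have sensitivity 1, so Laplace scale = 1/epsilon.
reports = obs + rng.laplace(scale=1.0 / EPSILON, size=N_AGENTS)

# The aggregator never sees raw data, only privatised reports; the zero-mean
# noise averages out, at the cost of slower convergence for smaller epsilon.
estimate = reports.mean()
print(f"true: {TRUE_STATE}, estimate: {estimate:.3f}")
```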