
Research

INFORMED-AI tackles the fundamental challenges in collective intelligence and distributed AI, focusing on the following overarching themes and exploiting the research strengths of the core groups in Bristol, Imperial, Cambridge and Durham.

Our research is organised into Work Packages that pool expertise across institutions. Each work package pursues objectives that cut across the three research themes below and is led by leading academics and postdoctoral researchers from across our partner institutions.

Trustworthy Collective Intelligence:   

Transparent distributed AI with self-monitoring and threat detection.  

Security and trustworthiness are paramount in applications such as critical infrastructure networks. A core strength of distributed systems such as the internet is that they have no central controller constituting a single point of failure. Instead, they exploit local interactions between agents, which makes them highly scalable, fault-tolerant and adaptive to changing environments. However, individual agents can be compromised, so a core challenge for distributed AI is to develop algorithms that combine local behaviour monitoring with global consensus formation. Such algorithms are central to ensuring reliable operation by quickly identifying and isolating threats.
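The combination of local monitoring and global consensus can be sketched in a few lines. The following toy model is our own illustration, not an INFORMED-AI algorithm: a trimmed-mean (W-MSR-style) update in which each agent discards the `f` most extreme neighbour values before averaging, so that a single compromised agent cannot steer the collective.

```python
def robust_consensus_step(values, neighbours, f=1):
    """One round of trimmed-mean consensus: each agent drops the f highest
    and f lowest neighbour values before averaging (local monitoring)."""
    new = []
    for i, v in enumerate(values):
        nbr_vals = sorted(values[j] for j in neighbours[i])
        trimmed = nbr_vals[f:len(nbr_vals) - f] or nbr_vals
        new.append((v + sum(trimmed)) / (1 + len(trimmed)))
    return new

# Fully connected toy network: 5 honest agents plus one compromised
# agent that always reports 1000 and never updates.
values = [1.0, 2.0, 3.0, 4.0, 5.0, 1000.0]
neighbours = {i: [j for j in range(6) if j != i] for i in range(6)}
for _ in range(20):
    values = robust_consensus_step(values, neighbours)[:5] + [1000.0]
```

The honest agents converge to a common value inside the range of their initial values, with the outlier filtered out at every step; under plain averaging, the compromised agent would drag the consensus towards 1000.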

Connectivity and Resilience:  

Impact of communication network topology on efficiency and robustness.  

A considerable body of research exists on the effect of communication network topology on the speed of information spread and consensus formation. Somewhat counterintuitively, full connectivity is not always best, and may in any case be impossible under communication constraints. By forming or breaking connections, intelligent agents can shape their network even as they use it for the core tasks of communication and learning. An important research thread is therefore to develop decentralised algorithms that optimise network topology jointly with the underlying learning or coordination task, subject to communication constraints and resilience requirements.
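To make the idea of agents shaping their own network concrete, here is a minimal greedy sketch (illustrative only; the ring topology, function names and degree budget are our own assumptions): each agent may form one new link, chosen to minimise the network's average shortest-path length, subject to a per-agent communication budget.

```python
from collections import deque

def avg_path_length(adj, n):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def local_add_link(adj, agent, max_degree, n):
    """Agent adds the single new link that most reduces average path
    length, respecting its own and the partner's degree budget."""
    best, choice = avg_path_length(adj, n), None
    for other in range(n):
        if other == agent or other in adj[agent]:
            continue
        if len(adj[agent]) >= max_degree or len(adj[other]) >= max_degree:
            continue
        adj[agent].add(other); adj[other].add(agent)  # tentative link
        score = avg_path_length(adj, n)
        if score < best:
            best, choice = score, other
        adj[agent].discard(other); adj[other].discard(agent)  # revert
    if choice is not None:
        adj[agent].add(choice); adj[choice].add(agent)
    return choice

n, cap = 8, 3
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}  # ring network
before = avg_path_length(adj, n)
for agent in range(n):
    local_add_link(adj, agent, cap, n)
after = avg_path_length(adj, n)
```

Even under a tight degree budget, a handful of locally chosen shortcuts shortens average paths relative to the initial ring; in the same spirit, links could be broken or rewired to meet resilience requirements.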

Heterogeneous Distributed AI:  

Algorithms and optimization frameworks for agent teams with behavioural heterogeneity.  

Agents in distributed systems are often governed by different policies, run on different hardware and software, and pursue different objectives. Even in cases of functional homogeneity, such as swarms of identical robots, behavioural heterogeneity can emerge, whereby individuals learn different policies or pursue different goals. Indeed, some forms of behavioural heterogeneity are known to be advantageous in multi-agent reinforcement learning: different behaviour types can improve the robustness and flexibility of the system, allowing better adaptation to changing conditions and requirements. Understanding how behavioural heterogeneity enhances performance can provide insights into the design of mechanisms (in the game-theoretic sense) which facilitate the emergence of task specialization and cooperation in distributed systems.
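The emergence of specialization from functionally identical agents can be illustrated with a tiny congestion game (a hypothetical toy of our own, not a project result): each agent repeatedly best-responds to the others' task choices, and agents that start identically end up behaving differently, splitting across tasks in proportion to task value.

```python
def best_response_dynamics(n_agents, task_values, max_rounds=100):
    """Identical agents asynchronously switch to the task offering the
    highest per-agent payoff (task value shared equally by its agents)."""
    choice = [0] * n_agents  # functionally homogeneous start: all on task 0
    for _ in range(max_rounds):
        changed = False
        for i in range(n_agents):
            counts = [choice.count(t) for t in range(len(task_values))]

            def payoff(t):
                # Payoff agent i would receive on task t, counting itself.
                joining = counts[t] + (0 if choice[i] == t else 1)
                return task_values[t] / joining

            best = max(range(len(task_values)), key=payoff)
            if payoff(best) > payoff(choice[i]) + 1e-12:
                choice[i], changed = best, True
        if not changed:
            break
    return choice

# Six identical agents, two tasks with values 2.0 and 1.0.
split = best_response_dynamics(6, [2.0, 1.0])
```

The dynamics settle at an equilibrium in which four agents serve the more valuable task and two serve the other: behavioural heterogeneity has emerged from a homogeneous start, a simple instance of the game-theoretic mechanism-design perspective described above.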