Understanding how diverse groups of AI agents make collective decisions is crucial for building systems that are reliable, resilient, and beneficial to society. INFORMED AI academics are studying how varied skills, perspectives, and information sources interact in distributed AI teams.
The Hub is helping to inform the design of technologies that perform better in real-world settings, from drone swarms to social networks to autonomous vehicles.
***
Diversity in decision-making enables individuals with different experiences and skills to contribute to a team, and can lead to better outcomes. For teams of AI agents working together to achieve a goal, the same can be true!
As an example, consider a swarm of drones on a search-and-rescue mission in a complex new environment. It is a remote area, so connectivity to the drone operator(s) may be intermittent at best, and ideally the swarm should make as many decisions on its own as possible. Each drone can sense only its own local surroundings and can communicate only with nearby drones. Different drones may have differing sensing capabilities (some may have night vision, some radar, some specialised acoustic sensors), and may therefore hold differing “opinions” about where the swarm should search next. Minimizing the time taken to find the distressed parties is a challenging goal in such settings.
An important basic building block in such situations is collective decision-making, so-called “distributed consensus”. The work of Co-I Ganesh examines such processes in “Weighted Voter Models”, inspired by house-hunting honeybees. When a honeybee swarm must choose between two candidate sites for its next nest, each bee can be swayed by the “opinion” of the previous bee it has encountered. Under some assumptions, the work in [A] provides guarantees on the time it takes for the swarm as a whole to arrive at a consensus, and on the probability that the swarm reaches consensus on the best option.
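To make the dynamics concrete, here is a minimal, self-contained sketch of a weighted voter process. It is illustrative only: the number of agents, the persuasion weights, and the update rule are assumptions for this example rather than the exact model analysed in [A]. Option B is taken to be the better option, so agents holding it are slightly more persuasive; the simulation records how long the population takes to reach consensus and which option wins.

```python
import random

def weighted_voter_sim(n=100, frac_b=0.5, weight_b=1.2, rng=None):
    """Minimal weighted voter model sketch (not the exact model of [A]).

    n agents each hold opinion 'A' or 'B'. At every step a random agent
    meets another random agent and adopts its opinion with a probability
    proportional to that opinion's weight (option 'B' is the better
    option, so its holders are slightly more persuasive).
    Returns (steps_to_consensus, winning_opinion).
    """
    rng = rng or random.Random()
    opinions = ['B' if rng.random() < frac_b else 'A' for _ in range(n)]
    weights = {'A': 1.0, 'B': weight_b}
    steps = 0
    while len(set(opinions)) > 1:
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        # agent i is swayed by agent j with probability set by j's opinion weight
        if rng.random() < weights[opinions[j]] / max(weights.values()):
            opinions[i] = opinions[j]
        steps += 1
    return steps, opinions[0]

if __name__ == "__main__":
    wins_b = sum(weighted_voter_sim()[1] == 'B' for _ in range(200))
    print(f"swarm converged on the better option in {wins_b}/200 runs")
```

Even this toy version shows the qualitative behaviour studied in [A]: a small persuasion advantage for the better option makes consensus on it much more likely, at the cost of some convergence time.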
In a related strand of research, the work [B] of Co-I Lawry examines how opinions propagate through networks in which individuals learn both by regularly pooling their beliefs with those of others and by updating their beliefs on noisy evidence they receive directly. The work demonstrates the negative impact of “zealots” (individuals who cling to firmly-held beliefs despite contrary evidence, even when the majority disagrees with them) and shows how to identify them. The level of trust in evidence is of central importance here: the more distrustful individuals are of the evidence they receive, the harder it is for them to accurately identify zealots. This work has the potential to influence social network design by suggesting mechanisms to isolate agents with predetermined agendas.
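The sketch below illustrates the flavour of this kind of social learning model. The pooling rule (simple averaging), the Bayesian evidence update, and all parameter values are assumptions chosen for illustration, not the operators used in [B]; zealots are modelled simply as agents whose belief never changes.

```python
import random

def social_learning_sim(n=50, n_zealots=5, true_state=True,
                        evidence_accuracy=0.7, rounds=200, seed=0):
    """Sketch of probabilistic social learning with zealots (illustrative only).

    Each agent holds P(H), the probability that hypothesis H is true.
    Zealots keep their belief fixed at 0.01 regardless of evidence.
    Ordinary agents (i) average their belief with a random peer's and
    (ii) apply a Bayesian update on a noisy binary observation.
    Returns the mean final belief of the ordinary (non-zealot) agents.
    """
    rng = random.Random(seed)
    beliefs = [0.5] * n
    zealots = set(range(n_zealots))
    for z in zealots:
        beliefs[z] = 0.01  # firmly-held contrarian belief

    for _ in range(rounds):
        for i in range(n):
            if i in zealots:
                continue
            # (i) pool with a randomly chosen peer (simple linear pooling)
            j = rng.randrange(n)
            beliefs[i] = 0.5 * (beliefs[i] + beliefs[j])
            # (ii) Bayesian update on a noisy observation of the true state
            obs = true_state if rng.random() < evidence_accuracy else not true_state
            like_h = evidence_accuracy if obs else 1 - evidence_accuracy
            like_not_h = 1 - like_h
            p = beliefs[i]
            beliefs[i] = (p * like_h) / (p * like_h + (1 - p) * like_not_h)

    ordinary = [beliefs[i] for i in range(n) if i not in zealots]
    return sum(ordinary) / len(ordinary)

if __name__ == "__main__":
    print(f"mean belief of ordinary agents: {social_learning_sim():.3f}")
```

Lowering `evidence_accuracy` in this toy model mimics the effect described above: the noisier (and less trusted) the evidence, the more the fixed beliefs of the zealots drag the rest of the population away from the truth.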
The work [C] of Co-I Prorok examines how to measure diversity of behaviour and skills in a principled manner in multi-agent settings, and how to set up mechanisms that generate “controlled amounts of diversity” in a way that is productive for the task at hand. As a striking example of the behaviour that can emerge, consider simulations of robotic agents learning to play football without being taught “good strategy” in advance: of two teams of robots, one encouraged to be diverse and the other encouraged to behave homogeneously, the diverse team automatically learns the value of having a dedicated goalkeeper and proceeds to thrash the other team!
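As a toy illustration of what “measuring behavioural diversity” can mean, the sketch below scores a team by the average pairwise distance between its agents' action distributions over a set of sampled states. This simple total-variation score is an assumption made for illustration; it is not the metric defined in [C].

```python
import numpy as np

def behavioural_diversity(policies, states):
    """Illustrative diversity score for a multi-agent team (a sketch, not [C]'s metric).

    `policies` is a list of callables mapping a state to a probability
    distribution over a shared discrete action set; `states` is a sample
    of states. The score is the mean pairwise total-variation distance
    between agents' action distributions, averaged over states: 0 means
    all agents behave identically, values near 1 mean very different behaviour.
    """
    n = len(policies)
    dists = []
    for s in states:
        probs = [np.asarray(p(s), dtype=float) for p in policies]
        for i in range(n):
            for j in range(i + 1, n):
                dists.append(0.5 * np.abs(probs[i] - probs[j]).sum())
    return float(np.mean(dists))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two hypothetical agents: one acts uniformly, one strongly prefers action 0
    uniform = lambda s: np.ones(4) / 4
    biased = lambda s: np.array([0.85, 0.05, 0.05, 0.05])
    states = rng.normal(size=(16, 3))
    print(f"diversity score: {behavioural_diversity([uniform, biased], states):.3f}")
```

A score like this could, in principle, be used as a training signal to encourage or suppress diversity within a team, which is the spirit of the “controlled amounts of diversity” mechanisms described above.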
The diverse team of academics working on this problem, using a variety of approaches, sheds light on different facets of fundamental behaviour in these distributed settings where collective decision-making is required: from foundational theoretical analysis, to extensive simulation-based exploration of the range of possible outcomes, to embodied experiments in which robot swarms coordinate using such algorithms. The resulting research can improve the outcomes of collective decision-making algorithms not just in the drone swarm example above, but in settings as diverse as fleets of self-driving cars (e.g. reduced accident rates), social networks (e.g. identifying bad actors), and energy grids (e.g. more stable grids).
References
[A] Ayalvadi Ganesh, Sabine Hauert, and Emma Valla. “Consensus in the weighted voter model with noise-free and noisy observations.” Swarm Intelligence (2025): 1-42.
[B] Jonathan Lawry. “Zealot Detection in Probabilistic Social Learning Using Bounded Confidence.” Artificial Life Conference Proceedings.
[C] Matteo Bettini, Ryan Kortvelesy, and Amanda Prorok. “The impact of behavioral diversity in multi-agent reinforcement learning.”

