Building trustworthy, human-centred AI systems is essential for the future of technology. It is not just a matter of technical innovation; it also requires collaboration, adaptability, and strong data integrity. At INFORMED AI Hub, we champion these principles through research that advances responsible AI, ensuring systems are fair, reliable, and scalable. Two recent studies from Decentralised Learning Under Information Constraints provide critical insights into collective intelligence, swarm robotics, and data diversity strategies: key pillars for AI that learns with humans, not just for them.
Robust Consensus in Swarm Robotics
Consensus in the Weighted Voter Model with Noise-Free and Noisy Observations (Ganesh et al.)
Key Insights:
- Analyzes the Weighted Voter Model for best-of-n decision-making in swarm robotics.
- Provides exact probabilities and time bounds for consensus under both noise-free and noisy conditions.
- Demonstrates robustness: a single informed agent can influence large populations, with error probabilities decaying exponentially as initial support for the best option increases.
- Reduces reliance on costly simulations by offering predictive guarantees for swarm behavior.
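To make the decision process above concrete, here is a minimal Monte Carlo sketch of a weighted voter dynamic for best-of-2 decision-making. It is an illustration under simplifying assumptions, not the paper's exact model: the population is well mixed, the quality weights, noise level, and initial support are hypothetical parameters, and adoption probability is simply proportional to an option's weight.

```python
import random

def simulate_weighted_voter(n_agents=100, weights=(1.0, 0.8),
                            init_best=0.3, noise=0.0,
                            max_steps=100_000, rng=None):
    """Illustrative weighted voter dynamic (hypothetical parameters).

    Each step, a random agent observes another random agent's opinion,
    possibly flipped with probability `noise`, and adopts it with
    probability equal to that option's quality weight. Option 0 is the
    'best' option. Returns the winning option at consensus.
    """
    rng = rng or random.Random()
    opinions = [0 if rng.random() < init_best else 1 for _ in range(n_agents)]
    for _ in range(max_steps):
        if len(set(opinions)) == 1:          # consensus reached
            return opinions[0]
        i = rng.randrange(n_agents)          # agent updating its opinion
        j = rng.randrange(n_agents)          # agent being observed
        observed = opinions[j]
        if rng.random() < noise:             # noisy observation flips what is seen
            observed = 1 - observed
        if rng.random() < weights[observed]: # weighted adoption
            opinions[i] = observed
    return max(set(opinions), key=opinions.count)  # no consensus: report majority

# Estimate how often the swarm settles on the best option
rng = random.Random(42)
wins = sum(simulate_weighted_voter(rng=rng) == 0 for _ in range(200))
print(f"best-option consensus rate: {wins / 200:.2f}")
```

Raising `init_best` in this toy model makes runs ending in consensus on the inferior option increasingly rare, which mirrors the paper's finding that error probabilities decay as initial support for the best option grows.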
Ensuring Diversity and Quality in Categorical Datasets
Reduced Similarity Decompositions and Subsets of Random Categorical Datasets (Ganesan)
Key Insights:
- Introduces Unique Batch Decompositions (UBDs) to counter pseudo-similarity in categorical datasets.
- Establishes probabilistic bounds for similarity-free subsets, supporting fairer sampling in imbalanced datasets.
- Shows conditions under which entire datasets can be similarity-free with high probability, enabling leaner and more efficient data pipelines.
- Improves data curation strategies for large-scale AI systems, enhancing generalization and reducing bias.
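The idea of a similarity-free subset can be sketched with a simple greedy filter over a random categorical dataset. This is a hypothetical criterion for illustration only: rows are "pseudo-similar" here if they agree on at least half their attributes, and the greedy pass is far cruder than the paper's Unique Batch Decomposition construction.

```python
import random
from itertools import combinations

def similarity(row_a, row_b):
    """Fraction of categorical attributes on which two rows agree."""
    return sum(a == b for a, b in zip(row_a, row_b)) / len(row_a)

def greedy_similarity_free_subset(rows, threshold=0.5):
    """Greedily keep rows whose similarity to every already-kept row
    stays below `threshold` (an illustrative stand-in, not the paper's
    UBD construction)."""
    kept = []
    for row in rows:
        if all(similarity(row, k) < threshold for k in kept):
            kept.append(row)
    return kept

# Random categorical dataset: 200 rows, 8 attributes, 5 categories each
rng = random.Random(1)
data = [tuple(rng.randrange(5) for _ in range(8)) for _ in range(200)]
subset = greedy_similarity_free_subset(data, threshold=0.5)

# Every pair in the subset now agrees on fewer than half the attributes
assert all(similarity(a, b) < 0.5 for a, b in combinations(subset, 2))
print(f"kept {len(subset)} of {len(data)} rows")
```

With many attributes and many category values, most random row pairs fall below the threshold anyway, which echoes the paper's observation that entire random datasets can be similarity-free with high probability under the right conditions.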
Both studies highlight the importance of adaptability and trust in AI systems, whether through robust consensus mechanisms among distributed agents or through integrity in their data foundations. These contributions reflect INFORMED AI's mission to create ethical, human-centred AI, ensuring fairness, reliability, and resilience in future technologies.
*The INFORMED AI Hub acknowledges the use of Microsoft Copilot to assist with the drafting of this blog.

