Towards Responsible AI: Scalable Trust Modelling and Adaptive Behaviour in Complex Systems

17th November, 2025 | Research blog

Building trustworthy and adaptive AI systems is central to INFORMED AI’s mission of enabling responsible, human-centred innovation. Recent publications from our researchers, under the theme of Trustworthy Cooperation in Heterogeneous Teams, explore this challenge across diverse contexts, from multi-agent reinforcement learning to human–robot interaction in industrial and assistive settings.

A Principle of Targeted Intervention for Multi-Agent Reinforcement Learning (Jianhong Wang et al.)

Traditional multi-agent reinforcement learning (MARL) struggles with coordination when global guidance is impractical. This work introduces Pre-Strategy Intervention (PSI), a targeted intervention paradigm that applies guidance to a single agent rather than all agents, using Multi-Agent Influence Diagrams (MAIDs) and causal inference to steer the system toward desired outcomes.

Key findings:

  • Formal framework for analysing MARL interaction paradigms (self-organisation, global intervention, targeted intervention).
  • PSI maximises a composite goal (primary task plus an additional desired outcome) through causal effect optimisation; see the sketch after this list.
  • Demonstrated in Hanabi and the Multi-Agent Particle Environment, PSI outperforms global-guidance and intrinsic-reward baselines, achieving better coordination and convergence to preferred Nash equilibria.
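
To make the contrast with global intervention concrete, here is a minimal Python sketch of the targeted-intervention idea: an auxiliary desired-outcome signal shapes the reward of one chosen agent, while the rest of the team self-organises around it. The function names, reward dictionaries, and weighting scheme are illustrative assumptions, not the paper's implementation, which is derived from MAIDs and causal effect optimisation.

```python
def composite_reward(task_reward: float, outcome_bonus: float, beta: float = 0.5) -> float:
    """PSI-style composite goal: primary task reward plus a weighted
    desired-outcome term (the weighting scheme here is illustrative)."""
    return task_reward + beta * outcome_bonus

def apply_targeted_intervention(rewards: dict, target_agent: str,
                                bonuses: dict, beta: float = 0.5) -> dict:
    """Shape the reward of one chosen agent only; every other agent keeps
    its original task reward (a global intervention would shape them all)."""
    shaped = dict(rewards)
    shaped[target_agent] = composite_reward(
        rewards[target_agent], bonuses.get(target_agent, 0.0), beta)
    return shaped

# Illustrative step: agent "a0" receives the intervention signal, while
# "a1" and "a2" are left to self-organise around its steered behaviour.
rewards = {"a0": 1.0, "a1": 0.8, "a2": 0.5}   # hypothetical per-agent task rewards
bonuses = {"a0": 0.3}                         # hypothetical desired-outcome signal
print(apply_targeted_intervention(rewards, "a0", bonuses))
```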

Why it matters: This approach offers a scalable, principled way to embed human values and conventions into multi-agent systems, which is critical for domains like autonomous driving and collaborative robotics.

*The author has written a separate blog post about this research on Medium: A New Perspective in Understanding and Designing Multi-Agent Systems with External Intervention (David.J.W, November 2025).

An Integrated 3D Eye-Gaze Tracking Framework for Assessing Trust in HRI (Demiris, Estevez Casado et al.)

Trust in robots is dynamic and context-sensitive. This study pioneers a 3D eye-gaze tracking framework that uses AR head-mounted displays to capture real-time gaze and spatial data during realistic interactions with a Boston Dynamics Spot robot. A video of the testing is available on YouTube: An Integrated 3D Eye-Gaze Tracking Framework for Assessing Trust in Human-Robot Interaction (THRI 2025).

Key findings:

  • Low-reliability conditions triggered longer fixations, larger saccades, and more gaze transitions, all behaviours linked to heightened monitoring and reduced trust.
  • Bayesian modelling confirmed that combining temporal, spatial, and count-based gaze features improves prediction of trust scores (see the sketch after this list).
  • 3D-specific features outperform 2D metrics, underscoring the need for context-aware trust assessment.
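
As a rough illustration of how such features might be combined, the sketch below fits a simple Bayesian linear regression (zero-mean Gaussian prior on the weights, closed-form posterior mean) from three hypothetical gaze features to trust scores. The feature values, trust scores, prior, and model form are assumptions for illustration only; the study's actual Bayesian models and feature set may differ.

```python
import numpy as np

# Hypothetical per-trial gaze features (one row per interaction window):
# [mean fixation duration (s), mean saccade amplitude (deg), gaze transitions].
X = np.array([
    [0.42, 3.1, 12.0],
    [0.65, 5.4, 21.0],   # longer fixations, larger saccades: low-reliability condition
    [0.38, 2.8, 10.0],
    [0.71, 6.0, 25.0],
])
y = np.array([4.5, 2.1, 4.8, 1.9])   # illustrative self-reported trust scores

# Bayesian linear regression with a zero-mean Gaussian prior on the weights:
# posterior precision A = alpha*I + X^T X / sigma^2,
# posterior mean = A^{-1} X^T y / sigma^2.
X1 = np.hstack([np.ones((len(X), 1)), X])   # prepend an intercept column
alpha, sigma2 = 1.0, 0.25                   # prior precision, noise variance (assumed)
A = alpha * np.eye(X1.shape[1]) + X1.T @ X1 / sigma2
posterior_mean = np.linalg.solve(A, X1.T @ y / sigma2)
print("posterior weight means (intercept, fixation, saccade, transitions):", posterior_mean)
```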

Why it matters: Real-time gaze analysis could enable robots to adapt behaviour dynamically, fostering transparency and collaboration in safety-critical environments.

Continuous Real-Time Adaptation Framework for Enhancing Trust and Technology Acceptance: An Assistive Feeding Study (Demiris et al.)

Assistive robots operate in intimate, high-stakes settings where user comfort and trust are paramount. The CRAFTT framework adapts robot behaviour in real time based on user microexpressions, balancing comfort, efficiency, and safety through multi-objective optimisation.
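
As a toy illustration of the multi-objective idea, the sketch below scalarises hypothetical comfort, efficiency, and safety costs into a single weighted score and selects the cheapest candidate behaviour. The candidate actions, cost values, and weights are illustrative assumptions; CRAFTT's actual optimiser and microexpression processing are not shown here.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate robot behaviour with predicted costs (illustrative units)."""
    name: str
    comfort_cost: float   # e.g. discomfort inferred from user microexpressions
    time_cost: float      # efficiency penalty
    safety_cost: float

def scalarised_cost(c: Candidate, w_comfort: float = 0.5,
                    w_time: float = 0.3, w_safety: float = 0.2) -> float:
    """Weighted-sum scalarisation of the three objectives; the weights
    here are assumptions, not CRAFTT's tuned values."""
    return w_comfort * c.comfort_cost + w_time * c.time_cost + w_safety * c.safety_cost

# Pick the behaviour with the lowest combined cost for the current user state.
candidates = [
    Candidate("slow_approach",   comfort_cost=0.2, time_cost=0.7, safety_cost=0.1),
    Candidate("direct_approach", comfort_cost=0.8, time_cost=0.2, safety_cost=0.3),
]
best = min(candidates, key=scalarised_cost)
print("selected behaviour:", best.name)
```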

Key findings:

  • Adaptive behaviour significantly improved reliance intention, fluency, and technology acceptance, with minimal efficiency trade-offs.
  • 73% of participants preferred adaptive behaviour, citing intuitiveness and personalisation.
  • Bayesian analysis validated comfort cost reduction and positive trends in trust without increasing cognitive workload.

Why it matters: CRAFTT demonstrates how human-in-the-loop adaptation can transform assistive robotics, paving the way for inclusive, user-centred design in healthcare and beyond.

Across these studies, a common thread emerges: trust and adaptability are inseparable in human–AI systems. Whether guiding multi-agent coordination, interpreting gaze as a trust signal, or adapting robot behaviour in real time, INFORMED AI research advances methods that make AI systems transparent, responsive, and aligned with human needs.

Next Steps

  • Focus on scalable trust modelling and online adaptation.
  • Integrate multimodal signals (e.g., gaze, speech, physiology) to create AI systems that learn with, not just for, humans.

*The INFORMED AI Hub acknowledges the use of Microsoft Copilot to assist with the drafting of this blog.