INFORMED AI – Summer School 2025
June 16 @ 9:00 am - June 19 @ 5:00 pm
We are pleased to announce the Hub’s first summer school. The four-day event will be held at the University of Bristol School of Mathematics from Monday 16 June to Thursday 19 June 2025.
José Miguel Hernández-Lobato – Professor of Machine Learning, University of Cambridge & Chief AI Officer, Angstrom AI. Title: Diffusion models and their applications in molecular modeling
Peter Kairouz – Research Scientist at Google. Title: LLM Privacy: From Principles to Practice
Maryam Kamgarpour – Director of the Sycamore Lab, EPFL (École Polytechnique Fédérale de Lausanne). Title: Learning equilibria in multi-agent systems
For enquiries, please email informed-ai@bristol.ac.uk
***
Speaker bios and abstracts
Dr. Peter Kairouz is a Senior Staff Research Scientist at Google, where he leads key research and engineering initiatives. His work advances technologies like federated learning, privacy auditing, and differential privacy, driving forward responsible AI developments. Before joining Google, he completed a Postdoctoral Fellowship at Stanford University and earned his Ph.D. from the University of Illinois at Urbana-Champaign (UIUC). He is the recipient of several prestigious awards, including the 2012 Roberto Padovani Scholarship from Qualcomm’s Research Center, the 2015 ACM SIGMETRICS Best Paper Award, the 2015 Qualcomm Innovation Fellowship Finalist Award, the 2016 Harold L. Olesen Award for Excellence in Undergraduate Teaching from UIUC, and the 2021 ACM CCS Best Paper Award. Dr. Kairouz has organized numerous workshops and delivered tutorials on private learning and analytics at top-tier conferences, and he continues to serve in key editorial and leadership roles within the machine learning community.
Abstract:
Large language models (LLMs) present significant opportunities in content generation, question answering, and information retrieval, yet their training, fine-tuning, and deployment introduce serious privacy challenges. This crash course offers a concise overview of privacy-preserving machine learning (ML) in the context of this evolving landscape and the risks associated with LLMs. The course illuminates four key privacy principles inspired by known LLM vulnerabilities when handling user data: data minimization, data anonymization, transparency/consent, and verifiability. Focusing on practical applications, you’ll explore federated learning (FL) as a data minimization technique, covering its various flavors, algorithms, and implementations. You’ll then examine differential privacy (DP) as a gold standard for anonymization, learning about its properties, variants, and applications in conjunction with FL, including production deployments with formal privacy guarantees. In scenarios where achieving strong user-level DP proves difficult, you’ll discover a robust, task-and-model-agnostic membership inference attack to quantify risk by accurately estimating the actual leakage (empirical epsilon) in a single training run. You’ll see how these state-of-the-art techniques systematically mitigate many privacy risks, albeit sometimes with trade-offs in computation or performance. The course also examines verifiability through open-sourcing privacy tech and trusted execution environments. Finally, you’ll be introduced to the open research questions, challenges, and compelling future research directions that are shaping the future of privacy-preserving ML for foundation models.
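For readers new to differential privacy, the short sketch below illustrates the classical Gaussian mechanism, one standard way to anonymize an aggregated quantity such as a clipped, summed model update in federated learning. This is an illustrative example only, not course material; the function name, parameters, and privacy settings are assumptions chosen for the sketch.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """Release `value` with calibrated Gaussian noise for (epsilon, delta)-DP.

    Illustrative sketch: uses the classical calibration
    sigma = l2_sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    which is valid for epsilon <= 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Hypothetical usage: privatize a summed client update after each per-client
# contribution has been clipped to L2 norm 1.0, so any single client changes
# the sum by at most 1.0 (the sensitivity).
clipped_sum = np.array([0.8, -0.3, 1.2])
noisy_sum = gaussian_mechanism(clipped_sum, l2_sensitivity=1.0,
                               epsilon=0.5, delta=1e-5)
```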
Maryam Kamgarpour holds a Doctor of Philosophy in Engineering from the University of California, Berkeley and a Bachelor of Applied Science from the University of Waterloo, Canada. Her research is on safe decision-making and control under uncertainty, game theory and mechanism design, and mixed-integer and stochastic optimization and control. Her theoretical research is motivated by control challenges arising in intelligent transportation networks, robotics, power grid systems, and healthcare. She is the recipient of the NASA High Potential Individual Award (2010), the European Research Council (ERC) Starting Grant (2015), and the European Control Award (2024).
Abstract: tbc
***
The summer school is part of the INFORMED-AI hub, one of three EPSRC-funded hubs on the mathematical foundations of artificial intelligence. A joint venture between the Universities of Bristol, Cambridge, and Durham and Imperial College London, the hub focuses on collective intelligence in distributed multi-agent systems such as power grids, transport and communication networks, and robot swarms.