I am a Ph.D. student in the Human-Computer Interaction Institute (HCII) within the School of Computer Science at Carnegie Mellon University. I am fortunate to be advised by Steven Wu and Ken Holstein.

I develop statistical tools for measuring the capabilities and limitations of algorithmic systems. I am especially interested in developing practical methods for addressing sociotechnical evaluation challenges — e.g., related to imperfect “ground truth” labels and unobserved contextual information. My work is generously supported by an NSF Graduate Research Fellowship and the Center for Advancing Safety of Machine Intelligence (CASMI).

Previously, I completed my Master’s in Computer Science at Cambridge. I also studied Computer Science and Psychology at the University of Missouri, where I co-founded TigerAware, a mobile research platform.


Research Keywords: Sociotechnical Evaluation, Measurement, Validity, Uncertainty Quantification, Human-Algorithm Decision-Making.


Luke Guerdan
lguerdan [at] cs.cmu.edu

News & Travel


Oct 2024 I will give a talk at the INFORMS '24 Session on Human-Centered AI and Decision Making for Social Good.
May 2024 I gave a talk at the workshop on Bridging Prediction and Intervention Problems in Social Systems at Banff International Research Station.
May 2024 I am excited to intern with Alexandra Chouldechova, Solon Barocas and Hanna Wallach in the Fairness, Accountability, Transparency and Ethics (FATE) group at Microsoft Research NYC this summer.
May 2024 New work on Predictive Performance Comparison of Decision Policies Under Confounding accepted at ICML 2024.
Feb 2024 I gave a talk “Human-Algorithm Decision-Making Under Imperfect Proxy Labels” at the 2024 Lecture Series on Network Inequality at CSH Vienna.
Jun 2023 Our work Counterfactual Prediction Under Outcome Measurement Error won a Best Paper Award at FAccT '23.
Apr 2023 Two papers accepted at FAccT '23.

Selected Work


  1. Predictive Performance Comparison of Decision Policies Under Confounding. Luke Guerdan, Amanda Coston, Kenneth Holstein, and Zhiwei Steven Wu. Proceedings of the International Conference on Machine Learning (ICML), 2024. [arXiv] [Code]
  2. Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge. Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Kate Glazko, Matthew Lee, Scott Carter, Nikos Arechiga, Haiyi Zhu, and Kenneth Holstein. Proceedings of the ACM Collective Intelligence Conference (CI), 2023. [arXiv]
  3. Counterfactual Prediction Under Outcome Measurement Error. Luke Guerdan, Amanda Coston, Kenneth Holstein, and Zhiwei Steven Wu. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023. [PDF] [Video] [Code] Best Paper Award
  4. Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making. Luke Guerdan, Amanda Coston, Zhiwei Steven Wu, and Kenneth Holstein. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023. [PDF] [Video]
  5. Under-Reliance or Misalignment? How Proxy Outcomes Limit Measurement of Appropriate Reliance in AI-Assisted Decision-Making. Luke Guerdan, Kenneth Holstein, and Zhiwei Steven Wu. ACM CHI 2022 Workshop on Trust and Reliance in AI-Human Teams (CHI TRAIT), 2022. [PDF] [Video] Spotlight Talk