Seminar on Prediction Under Intervention(s), Leiden
Department of Data Science Methods, Julius Center, University Medical Center Utrecht
2025-12-02


What could possibly go wrong?
Wouter van Amsterdam, Nan van Geloven, Jesse Krijthe, Rajesh Ranganath, Giovanni Cinà. When accurate prediction models yield harmful self-fulfilling prophecies. Patterns, 2025.


We formalized the simplest general case
Define:
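The formal definitions on this slide did not survive extraction. Below is a minimal sketch in standard potential-outcomes notation, assuming a binary treatment and a model-induced policy; the symbols X, A, Y, pi_0, pi_f, and V are illustrative placeholders, not necessarily the paper's own notation:

\[
\begin{aligned}
X &: \text{patient features}, \qquad A \in \{0,1\} : \text{treatment}, \qquad Y \in \{0,1\} : \text{outcome}, \\
Y(a) &: \text{potential outcome under treatment } a, \\
\pi_0 &: \text{historical treatment policy that generated the training data}, \\
f(x) &\approx P(Y = 1 \mid X = x) : \text{prediction model fitted under } \pi_0, \\
\pi_f &: \text{policy obtained when clinicians act on } f, \\
V(\pi) &= E\!\left[\, Y\!\left(\pi(X)\right) \,\right] : \text{expected outcome when policy } \pi \text{ is deployed}.
\end{aligned}
\]

A harmful self-fulfilling prophecy then corresponds to V(pi_f) being worse than V(pi_0), while f remains accurate (e.g., well-calibrated) on data generated under pi_f, so standard post-deployment accuracy checks do not flag the harm.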


Withholding lifesaving treatments: When AI predicts low survival for certain patients, clinicians may deny treatment, causing worse outcomes that falsely validate the model (see the simulation sketch after this list).
Rehabilitation triage bias: AI tools predicting poor recovery after surgery can lead hospitals to allocate fewer rehab resources to those patients, directly causing the poor outcomes the model anticipated.
Post-deployment performance paradox: If real-world care improves outcomes for certain patients, models trained on historical data may appear to “fail,” encouraging withdrawal of beneficial changes and reinforcing the old, harmful patterns.
Perpetuating historical under-treatment: Models trained on biased historical data may predict poor outcomes for groups who were previously under-treated, and clinicians acting on these predictions can continue the cycle, worsening outcomes and deepening disparities.
Generalisation beyond healthcare: Predictive models used in policing can label historically over-policed demographics as “high risk,” triggering intensified surveillance that produces the very outcomes used to justify the predictions.
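
To make the first mechanism concrete, here is a small self-contained simulation. It is an illustrative sketch, not the paper's code: the sigmoid risk function, the assumed treatment effect (risk halved), and the 0.35 withholding cutoff are all arbitrary assumptions. A model that is perfectly calibrated under the historical treat-everyone policy triggers treatment withholding for flagged patients, worsens overall outcomes, and still appears validated because the flagged patients do indeed fare worse:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)  # illustrative severity score

def risk(x, treated):
    # Untreated risk increases with severity; treatment halves it (assumed effect).
    base = 1.0 / (1.0 + np.exp(-x))
    return np.where(treated, 0.5 * base, base)

# Historical policy: everyone is treated. A model fitted on these data
# predicts 0.5 * sigmoid(x) and is perfectly calibrated under that policy.
y_hist = rng.binomial(1, risk(x, np.ones(n, dtype=bool)))
pred = 0.5 / (1.0 + np.exp(-x))

# Deployment: treatment is withheld when predicted risk exceeds the
# (arbitrary) 0.35 cutoff.
flagged = pred > 0.35
y_new = rng.binomial(1, risk(x, ~flagged))

print("event rate, historical policy:  ", y_hist.mean())          # ~0.25
print("event rate, model-guided policy:", y_new.mean())           # higher: net harm
print("event rate, flagged patients:   ", y_new[flagged].mean())  # ~0.8
print("event rate, unflagged patients: ", y_new[~flagged].mean()) # ~0.2
# Flagged patients fare far worse, "confirming" the model's predictions,
# even though the withholding itself caused much of their excess risk.

Under the model-guided policy the overall event rate rises, yet the model still separates high- and low-risk patients sharply, which is exactly the false validation the first bullet describes.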
©Wouter van Amsterdam — WvanAmsterdam — wvanamsterdam.com/talks