The most dangerous prediction systems are those that don’t know what they don’t know. This talk explores the distinction between irreducible randomness and reducible ignorance, and why treating the latter as the former leads to failures like the 2008 financial crisis. It covers practical tools for quantifying uncertainty in AI systems: ensemble methods, conformal prediction, and selective abstention. Special attention is given to large language models, where confident-sounding text can mask factual unreliability. The talk concludes with lessons from domains that handle uncertainty well—weather forecasting, healthcare, aviation safety—and organizational practices that reward honesty over false confidence. The central argument: the goal is not to eliminate uncertainty, but to acknowledge it clearly and decide wisely despite it.
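As a flavor of the kind of tooling the talk covers, the sketch below combines split conformal prediction with selective abstention: a score threshold is calibrated on held-out data, and the classifier answers only when the resulting prediction set contains a single label. This is a minimal illustration under assumed inputs (NumPy-only, hypothetical function names and abstention rule), not material from the talk itself.

```python
# Minimal sketch: split conformal prediction + selective abstention.
# Names and the abstention rule are illustrative assumptions.
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Calibrate a nonconformity-score threshold on a held-out set.

    cal_probs: (n, K) predicted class probabilities on calibration data
    cal_labels: (n,) true labels
    alpha: target miscoverage rate (0.1 aims for ~90% coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def predict_or_abstain(test_probs, q_hat):
    """Return a label when the conformal set is a singleton, otherwise abstain (None)."""
    decisions = []
    for probs in test_probs:
        prediction_set = np.where(1.0 - probs <= q_hat)[0]
        decisions.append(int(prediction_set[0]) if len(prediction_set) == 1 else None)
    return decisions
```

In this sketch, abstention is simply "the calibrated prediction set is not a singleton"; real systems can instead defer to a human or a stronger model when that happens.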
Bio:
Stephan Rabanser works on trustworthy machine learning, with a particular focus on uncertainty quantification, selective prediction, and out-of-distribution generalization/robustness. At a high level, his research aims to improve the reliability of machine learning systems under uncertainty and distribution shift. Rabanser develops principled yet practical methods that help models understand what they know—and crucially, when they should abstain—whether by quantifying predictive uncertainty, deferring to expert models, or rejecting unfamiliar inputs. He also studies how models can generalize reliably under distribution shift, with applications ranging from out-of-distribution detection and time series anomaly detection to robustness in federated learning. A recurring theme of his research is to design intelligent systems that remain trustworthy even under imperfect or adversarial conditions, such as privacy constraints, limited data, or non-stationary environments. His current research explores how uncertainty can be designed and leveraged in large generative models to support more reliable decision-making and safer deployment.
Rabanser holds a Ph.D. in computer science from the University of Toronto, an M.Sc. and a B.Sc. in informatics from the Technical University of Munich (TUM), and an Honours Degree in technology management from the Center for Digital Technology and Management (CDTM). In recent years, he has held engineering and research positions at Amazon / AWS AI Labs and Google. He has also been a research visitor at the Massachusetts Institute of Technology (MIT), Carnegie Mellon University (CMU), and the University of Cambridge.
Rabanser’s Google Scholar webpage
In-person attendance is open to Princeton University faculty, staff and students.
This talk will be livestreamed and recorded. The recording will be posted to the CITP website, the Princeton University Media Central channel and the CITP YouTube channel.
If you need an accommodation for a disability, please contact Jean Butcher at butcher@princeton.edu at least one week prior to the event.
Sponsorship of an event does not constitute institutional endorsement of external speakers or views presented.