Attendance restricted to Princeton University faculty, staff and students.
Machine learning algorithms are widely used for decision-making in societally high-stakes settings, from child welfare and criminal justice to healthcare and consumer lending. Recent history has illuminated numerous examples where these algorithms proved unreliable or inequitable. This talk will show how causal inference enables us to evaluate such algorithms’ performance and equity implications more reliably. In the first part of the talk, we demonstrate that standard evaluation procedures fail to address missing data and, as a result, often produce invalid assessments of algorithmic performance. We propose a new evaluation framework that addresses missing data by using counterfactual techniques to estimate unknown outcomes. Using this framework, we propose counterfactual analogues of common predictive performance and algorithmic fairness metrics that are tailored to decision-making settings. We provide double machine learning-style estimators for these metrics that achieve fast rates and asymptotic normality under flexible nonparametric conditions. Empirical results will be presented in the child welfare setting using data from Allegheny County’s Department of Human Services.
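To make the missing-data problem concrete: when historical decisions determine which outcomes are observed, a naive average over observed cases is biased, while a doubly robust (AIPW-style) estimator combines an outcome regression with inverse-probability weighting to recover the counterfactual mean. The sketch below is a minimal toy illustration of that general idea on synthetic data, not the framework or estimators from the talk; all variable names are illustrative, and the nuisance functions are taken as known to keep the example short (in practice they would be estimated with flexible, cross-fitted ML models).

```python
# Toy sketch: doubly robust (AIPW) estimation of a counterfactual mean
# E[Y^{a=0}] when the outcome Y is only observed for cases that received
# the baseline decision A=0, and the decision depended on a covariate X.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                        # covariate
pi = 1 / (1 + np.exp(-x))                     # P(A=1 | X): decision depends on X
a = rng.binomial(1, pi)                       # historical decision
y0 = x + rng.normal(size=n)                   # outcome under baseline decision A=0
y = np.where(a == 0, y0, np.nan)              # Y observed only when A=0

# Naive estimate: average observed outcomes. Biased, because A=0 cases
# systematically have lower X (true counterfactual mean is 0).
naive = y[a == 0].mean()

# Nuisance functions (oracle values here; cross-fitted ML fits in practice).
pi_hat = pi                                   # propensity P(A=1 | X)
mu0_hat = x                                   # outcome regression E[Y | A=0, X]

# AIPW: outcome-regression term plus inverse-probability-weighted residual.
ipw = (a == 0) / (1 - pi_hat)
psi = mu0_hat + ipw * (np.nan_to_num(y, nan=0.0) - mu0_hat)
est = psi.mean()                              # close to the true value 0
```

The key property is double robustness: the estimator remains consistent if either the propensity model or the outcome regression is correct, and with both estimated flexibly it can attain fast rates and asymptotic normality, the regime the talk's estimators target.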
In the second half of the talk, we propose novel causal inference methods to audit for bias at key decision points in contexts where machine learning algorithms are used. A common challenge is that data about decisions are often observed under outcome-dependent sampling. We develop a counterfactual audit for biased decision-making in settings with outcome-dependent data. Using data from the Stanford Open Policing Project, we demonstrate how this method can identify racial bias at the most common entry point to the criminal justice system: police traffic stops. To conclude, we situate this work in the broader question of governance in responsible machine learning.
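A brief toy illustration of why outcome-dependent sampling matters (this is not the audit method from the talk): if records enter the dataset with a probability that depends on the outcome itself, rates computed naively on the sample are distorted, while weighting by the inverse of the (here assumed known) sampling probabilities recovers the population rate. All numbers below are synthetic.

```python
# Toy sketch: outcome-dependent sampling biases naive rates.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
y = rng.binomial(1, 0.1, size=n)              # true population rate: 10%
p_sample = np.where(y == 1, 0.9, 0.3)         # positives far more likely recorded
s = rng.binomial(1, p_sample)                 # sampling indicator: in dataset or not

naive = y[s == 1].mean()                      # biased upward (about 0.25 here)
weights = 1 / p_sample[s == 1]                # inverse sampling probabilities
weighted = np.average(y[s == 1], weights=weights)  # recovers roughly 0.10
```

In real audit settings the sampling mechanism is not known and is entangled with the decisions under study, which is precisely why dedicated counterfactual methods like those in the talk are needed.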
Amanda Coston is a PhD student in Machine Learning and Public Policy at Carnegie Mellon University (CMU). She is interested in how machine learning and causal inference can make decision-making in societally high-stakes settings more reliable and more equitable.
Her research addresses real-world data problems that challenge the reliability of algorithmic decision support systems and data-driven policy-making. A central focus of her research is identifying when algorithms, data used for policy-making, and human decisions disproportionately impact marginalized groups. Much of her work uses doubly robust techniques for bias correction. She is advised by Alexandra Chouldechova and Edward H. Kennedy.
Amanda is a Rising Star in EECS, Machine Learning and Data Science; a Meta Research PhD Fellow; an NSF GRFP Fellow; a K & L Gates Presidential Fellow in Ethics and Computational Technologies; and a Tata Consultancy Services Presidential Fellow. Her work has been recognized with best paper awards and featured in The Wall Street Journal and VentureBeat.