This week’s Work-in-Progress talk is from Angelina Wang and Sayash Kapoor, graduate students in COS and CITP. In the talk, they will present their systematic critique of the increasingly widespread use of machine learning to predict outcomes for individuals. You can read their full talk abstract below for more details.
We formalize predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. For example, recidivism prediction algorithms such as COMPAS use ML to predict whether an individual will re-offend in the future. Our thesis is that predictive optimization raises a distinctive, and particularly serious, set of normative concerns. To test this, we begin by reviewing 418 reports, articles, and webpages from academia, industry, non-profits, governments, and modeling contests, and find 78 real-world examples of predictive optimization. We narrow these down to 8 particularly impactful examples in order to evaluate the potential risks of deploying predictive optimization. In parallel, we assemble a set of normative and technical critiques of specific applications from the literature. Our key finding is that these critiques apply across the full range of applications and are not easily evaded by redesigning the systems, and that they thus challenge the legitimacy of their deployment. Taken together, our results serve both as a toolkit for scholars and advocates who seek to resist an uncritical proliferation of predictive optimization, and as a checklist of risks that decision makers must explicitly account for when deploying predictive optimization.