Can algorithms help judges make fair decisions? After all, human judges can often be biased, so should we rely on ostensibly neutral technology instead? In a recent interview with WHYY, Philadelphia’s public radio station (an NPR member station), CITP fellow Annette Zimmermann discusses the ethical implications of using AI in criminal justice and other key public institutions.

Zimmermann argues that we are never completely “done” with AI ethics: rather than checking once, at the design stage, whether an algorithm meets certain fairness criteria, ethical thinking about algorithms has to be an ongoing process of deliberation that continues after AI tools are deployed in the real world. As Zimmermann points out in her comments on WHYY’s science and innovation podcast “The Pulse,” algorithms, like humans, will make mistakes, and not all of them can be foreseen when the technology is designed. Algorithmic models can interact with the social world in complex ways: while it is common to think of data as a kind of social mirror that simply reflects human biases (“garbage in, garbage out”), data is better understood as a magnifying glass that can amplify existing inequality if left unchecked. Determining what is fair, then, is ultimately an ethical and political question: algorithms themselves cannot conclusively tell us what fairness requires.
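To see how data can act as a magnifying glass rather than a mirror, consider a toy feedback-loop simulation. This sketch is purely illustrative: the scenario, numbers, and code are assumptions of this post, not anything presented in the interview. Two neighborhoods have the same true incident rate, but one starts out with a larger recorded history, and each day a patrol is sent wherever the data looks worst.

```python
import random

# Toy illustration (hypothetical scenario, not from the interview):
# two neighborhoods with the SAME true incident rate, but "A" starts
# with more recorded incidents because it was historically watched
# more closely. Each day the single available patrol follows the data,
# and new incidents are recorded only where the patrol is sent.

random.seed(0)
TRUE_RATE = 0.3                      # identical in both neighborhoods
recorded = {"A": 6, "B": 4}          # hypothetical biased starting record

for day in range(365):
    target = max(recorded, key=recorded.get)   # go where the data says
    if random.random() < TRUE_RATE:            # incident observed only
        recorded[target] += 1                  # where we were looking

print(recorded)  # roughly {'A': 115, 'B': 4}: equal rates, diverging data
```

Because new records accrue only where attention is directed, the initial gap in the data widens over time even though the underlying rates never differ. A one-time fairness audit at the design stage would miss this dynamic, which is one way to understand Zimmermann’s point that AI ethics must be an ongoing process.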