Video available here.
Algorithms constantly make predictions about people. The spread of such prediction systems has raised concerns that machine learning algorithms may exhibit problematic behavior, especially against individuals from marginalized groups. This talk will provide an overview of research building a theory of “responsible” machine learning. It will highlight a notion of fairness in prediction, called Multicalibration (ICML’18), which requires predictions to be well-calibrated, not simply overall, but on every group that can be meaningfully identified from data. This “multi-group” approach strengthens the guarantees of group fairness definitions without incurring the statistical and computational costs associated with individual-level protections. Additionally, a new paradigm for learning, Outcome Indistinguishability (STOC’21), will be presented, which provides a broad framework for learning predictors that satisfy formal guarantees of responsibility. Finally, the talk will discuss Undetectable Backdoors (FOCS’22), a threat that represents a serious challenge for building trust in machine learning models.
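To make the multi-group idea concrete, here is a minimal sketch (not from the talk) of auditing a predictor for multicalibration over a collection of groups. The function names, the binned calibration-error metric, and the tolerance `alpha` are all illustrative assumptions, not the construction from the ICML’18 paper.

```python
import numpy as np

def calibration_error(preds, outcomes, mask, n_bins=10):
    """Binned calibration error of `preds` vs. binary `outcomes`,
    restricted to the sub-population selected by boolean `mask`."""
    p, y = preds[mask], outcomes[mask]
    bins = np.digitize(p, np.linspace(0, 1, n_bins + 1)[1:-1])
    err = 0.0
    for b in range(n_bins):
        in_bin = bins == b
        if not in_bin.any():
            continue
        # Weight each bin's miscalibration by its share of the group.
        err += in_bin.mean() * abs(y[in_bin].mean() - p[in_bin].mean())
    return err

def audit_multicalibration(preds, outcomes, groups, alpha=0.05):
    """Return the names of the groups on which calibration error exceeds
    `alpha`; an empty list means the predictor passes this audit with
    respect to the given collection of groups."""
    return [name for name, mask in groups.items()
            if calibration_error(preds, outcomes, mask) > alpha]

# Toy usage: a predictor that is calibrated by construction should pass.
rng = np.random.default_rng(0)
preds = rng.uniform(size=5000)
outcomes = (rng.uniform(size=5000) < preds).astype(float)
groups = {"everyone": np.ones(5000, dtype=bool),
          "subgroup_A": np.arange(5000) % 2 == 0}
print(audit_multicalibration(preds, outcomes, groups))  # typically prints []
```

The point of the sketch is that the audit quantifies calibration separately on every group in the collection, rather than only on the population as a whole; a predictor that looks calibrated in aggregate can still fail on a subgroup.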
Bio:
Michael P. Kim is a postdoctoral research fellow at the Miller Institute for Basic Research in Science at UC Berkeley, hosted by Shafi Goldwasser. Before this, Kim completed his Ph.D. in computer science at Stanford University, advised by Omer Reingold. Kim’s research addresses basic questions about the appropriate use of machine learning algorithms that make predictions about people. More generally, Kim is interested in how the computational lens (i.e., algorithms and complexity theory) can provide insights into emerging societal and scientific challenges.
To request accommodations for a disability, please contact Jean Butcher at least one week prior to the event.