Bias in AI Reading Group - Ben Laufer - Regulation Along the AI Development Pipeline for Fairness, Safety and Related Goals

Date
Apr 30, 2025, 11:00 am – 12:00 pm
Location
AI Lab Conference Room 274, 41 Williams Street
Audience
Current Princeton faculty, staff, and students.

Details

Event Description

Machine learning (ML) and artificial intelligence (AI) systems are designed within a broader ecosystem involving multiple actors and interests. This talk focuses on attempts to regulate the AI development process to make these technologies fair, safe, performant, or otherwise aligned with social ends.

The talk starts with a discussion of one proposal for ML regulation, stemming from U.S. disparate impact doctrine, which compels plaintiffs or firms to search for a “less discriminatory alternative” (LDA), an alternative policy that meets the same business needs but exhibits lower disparate impacts across protected populations. Defining this concept for data-driven decision-making might open up a promising avenue for regulation; however, a number of technical challenges remain. Laufer will provide a set of formal results characterizing the ‘multiplicity’ of model designs and the limits and opportunities for searching for LDAs. [based on joint work w/ Manish Raghavan, Solon Barocas]
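To make the idea concrete, here is a minimal, hypothetical sketch of an LDA search that exploits model multiplicity: it trains many near-equivalent classifiers, measures each one's accuracy and a simple disparity metric (the ratio of selection rates across a binary protected group), and then, among models within a small accuracy tolerance of the best (the "same business needs"), picks the least disparate one. The synthetic data, feature-subset construction, disparity metric, and tolerance are all illustrative assumptions, not the formal construction from the talk.

```python
# A minimal, hypothetical sketch of an LDA search via model multiplicity.
# All data, thresholds, and the disparity metric are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 6 features, a binary label, and a binary protected group.
n = 5000
X = rng.normal(size=(n, 6))
group = rng.integers(0, 2, size=n)   # protected attribute (not a model input)
X[:, 2] += 0.8 * group               # one feature is correlated with the group
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

def selection_rate_ratio(pred, g):
    """Ratio of positive-prediction rates across groups (1.0 = parity)."""
    r0, r1 = pred[g == 0].mean(), pred[g == 1].mean()
    return min(r0, r1) / max(r0, r1)

# Model multiplicity: many near-equivalent models from different feature subsets.
candidates = []
for _ in range(50):
    feats = rng.choice(6, size=4, replace=False)
    clf = LogisticRegression().fit(X_tr[:, feats], y_tr)
    pred = clf.predict(X_te[:, feats])
    acc = (pred == y_te).mean()
    candidates.append((acc, selection_rate_ratio(pred, g_te), feats))

best_acc = max(a for a, _, _ in candidates)
# LDA search: among models within two points of the best accuracy ("same
# business needs"), pick the one with the least disparity.
viable = [c for c in candidates if c[0] >= best_acc - 0.02]
lda = max(viable, key=lambda c: c[1])
print(f"best accuracy: {best_acc:.3f}")
print(f"LDA: accuracy={lda[0]:.3f}, selection-rate ratio={lda[1]:.3f}, features={lda[2]}")
```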

More generally, AI is often deployed through a pipeline in which a general-purpose producer's technology is adapted to a number of different domains. Laufer will put forward a model of how regulation would operate in this sort of process. Reasoning about the interaction between regulators, general-purpose AI creators, and domain specialists suggests that even straightforward and modest regulatory measures can backfire, inadvertently undermining safety outcomes. Conversely, stronger regulations, applied strategically along the development pipeline, can boost both safety and performance outcomes. [based on joint work w/ Jon Kleinberg, Hoda Heidari]
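As a sense of what such a model might look like, the following toy sketch sets up a two-player game between a general-purpose producer and a domain specialist, each choosing a safety-effort level, with a regulator fining players whose realized safety falls below a threshold. It brute-forces pure-strategy Nash equilibria under two regimes: fining the downstream specialist only versus fining along the whole pipeline. All payoffs, functional forms, and parameters are invented for illustration; this is not the model from the talk.

```python
# A toy, hypothetical pipeline game for exploring where regulation lands.
# Payoffs, functional forms, and parameters are invented for illustration.
import itertools
import numpy as np

efforts = np.linspace(0, 1, 21)          # feasible safety-effort levels

def outcome_safety(s_gen, s_dom):
    """Realized safety depends on both the general-purpose producer's
    effort (s_gen) and the domain specialist's effort (s_dom)."""
    return 1 - (1 - s_gen) * (1 - s_dom)

def payoffs(s_gen, s_dom, fine_gen, fine_dom, threshold=0.8):
    """Each player earns fixed revenue, pays a quadratic effort cost, and
    pays a fine if realized safety falls below the regulator's threshold."""
    penalty = 1.0 if outcome_safety(s_gen, s_dom) < threshold else 0.0
    u_gen = 1.0 - 0.8 * s_gen**2 - fine_gen * penalty
    u_dom = 1.0 - 0.8 * s_dom**2 - fine_dom * penalty
    return u_gen, u_dom

def pure_nash(fine_gen, fine_dom):
    """Brute-force pure-strategy Nash equilibria over the effort grid."""
    eq = []
    for s_g, s_d in itertools.product(efforts, efforts):
        u_g, u_d = payoffs(s_g, s_d, fine_gen, fine_dom)
        best_g = max(payoffs(x, s_d, fine_gen, fine_dom)[0] for x in efforts)
        best_d = max(payoffs(s_g, x, fine_gen, fine_dom)[1] for x in efforts)
        if u_g >= best_g - 1e-9 and u_d >= best_d - 1e-9:
            eq.append((s_g, s_d, outcome_safety(s_g, s_d)))
    return eq

# Compare fining only the domain specialist vs. fining both players.
for label, fines in [("downstream only", (0.0, 0.5)), ("whole pipeline", (0.5, 0.5))]:
    levels = sorted({round(s, 2) for _, _, s in pure_nash(*fines)})
    print(f"{label}: equilibrium safety levels: {levels}")
```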

The talk will conclude with a discussion of the role of formal models in building actionable regulatory frameworks for AI.

Bio: 

Benjamin Laufer is a Ph.D. student in the School of Computing and Information Sciences at Cornell Tech, where he is advised by Helen Nissenbaum and Jon Kleinberg, and affiliated with the AI, Policy and Practice Group and the Digital Life Initiative. He is interested in data-driven algorithmic systems and their implications for the public interest. His research uses tools and methods spanning statistics, game theory, network science, and ethics. Prior to joining Cornell, Laufer worked as a data scientist at Lime, where he applied machine learning to urban mobility decisions. He graduated from Princeton University with a B.S.E. in Operations Research and Financial Engineering with minors in Urban Studies and Environmental Studies.

Laufer's research is supported by a LinkedIn Fellowship. He has also spent time at Microsoft Research with the Fairness, Accountability, Transparency and Ethics (FATE) Group. He was named a "Rising Star" in Management Science and Engineering by Stanford.

The Bias in AI reading group meets to discuss various fairness issues that emerge in artificial intelligence. Anyone is welcome to present either their own work or other work in the space for the group to discuss. You may access the full schedule, reading list, and signup sheet directly. Please sign up for the mailing list to receive updates.

Please contact Kara Schechtman at [email protected] or Amaya Dharmasiri at [email protected] for further information.

Sponsorship of an event does not constitute institutional endorsement of external speakers or views presented.

Sponsor
Center for Information Technology Policy