Computer vision researchers have a broad set of stated goals, including building robots and autonomous cars, helping automate manufacturing and agriculture, and augmenting healthcare. However, computer vision has also been criticized as enabling deeper and more pervasive forms of surveillance and control.
In this talk, we explore how to assess the intents and impacts of a field as large and complex as computer vision. We start with audits of computer vision systems to understand the biases and toxic stereotypes they encode. We then examine top machine learning papers to understand why such biases are so pervasive in computer vision systems. Finally, we examine linkages between computer vision papers and patents to assess the real-world impacts of the field at scale, showing how we can develop tools to connect granular sub-fields, theoretical papers, or institutions to end products. We conclude by discussing how computer vision researchers can use these forms of sociotechnical foresight to enable their work to have impacts that align with their values.
William Agnew is currently a Ph.D. candidate at the University of Washington studying reinforcement learning, planning, robotics, and AI ethics. He is advised by Sidd Srinivasa and supported by an NDSEG Fellowship. In his spare time he runs marathons, rock climbs, backpacks, reads, and cooks.