Cancelled – CITP Luncheon Speaker Series: Aylin Caliskan – Brand New AI, Same Old Biases: Cultural Stereotypes in Visual-Semantic Embeddings


Date:
Tuesday, November 28, 2017
Time:
12:30 pm

Location:
Sherrerd Hall, 3rd Floor Open Space

This event was cancelled.

This talk will not be livestreamed or videotaped.

No RSVP required for current Princeton faculty, staff, and students. Open to members of the public by invitation only. Please contact Jean Butcher at  if you are interested in attending a particular lunch.

In this talk, we will investigate cultural stereotypes in computer vision models and then search for ways to approach transparency, accountability, and fairness in vision systems. Computer vision systems are already deployed in various domains, but a detailed discussion of their decision-making processes, their effects on society, and the relevant policy questions has not yet taken place in the computer science, ethics, or social science communities. State-of-the-art vision systems combine vision models with natural language models to form “visual-semantic spaces” that have led to significant improvements in tasks such as captioning images and answering visual questions. In a recent paper we showed that semantic word embeddings absorb cultural stereotypes [1]. Here, we study the same question in the context of joint visual-semantic embeddings. Joint embeddings are useful for image retrieval, and such systems are likely to reflect existing biases, such as a doctor being male and a nurse being female. A more worrying example is a video analytics system that could return results biased toward some racial groups when queried for suspicious activity. Such vision systems represent great advances for artificial intelligence, but at the same time they might perpetuate or even amplify existing biases. The quantitative methods we develop will help detect, understand, and mitigate such biases.

[1] Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora contain human-like biases.” Science 356.6334 (2017): 183-186.
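
To give a concrete sense of the kind of quantitative bias measure discussed in the abstract, below is a minimal sketch of a WEAT-style association test in the spirit of [1]. It is not the speaker's code: the function names and the toy random vectors are placeholders, and in a real study the vectors would be embeddings of target concepts (e.g., occupation words or image features from a joint visual-semantic space) and attribute terms (e.g., gendered words).

```python
# Sketch of a WEAT-style differential association test (after Caliskan et al. [1]).
# All vectors here are toy placeholders; real experiments would use embeddings
# from a trained word or visual-semantic model.

import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of w to attribute set A minus its mean similarity to B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Cohen's-d-style effect size of the differential association of
    target sets X vs. Y with attribute sets A vs. B."""
    assoc_x = [association(x, A, B) for x in X]
    assoc_y = [association(y, A, B) for y in Y]
    pooled_std = np.std(assoc_x + assoc_y, ddof=1)
    return (np.mean(assoc_x) - np.mean(assoc_y)) / pooled_std

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 50
    # Hypothetical embeddings: e.g., X/Y could be career- vs. family-related
    # targets and A/B male vs. female attribute terms.
    X = rng.normal(size=(8, dim))
    Y = rng.normal(size=(8, dim))
    A = rng.normal(size=(8, dim))
    B = rng.normal(size=(8, dim))
    print("WEAT-style effect size:", weat_effect_size(X, Y, A, B))
```

With random vectors the effect size hovers near zero; with embeddings trained on real corpora, [1] reports large effect sizes that mirror documented human stereotypes, and the talk extends the same style of measurement to joint visual-semantic embeddings.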