Dialogues in AI and Work

Overview

Artificial intelligence (AI), machine learning (ML), and data-driven predictive models increasingly govern our interactions with the world of work. The use of AI to distribute resources and deliver opportunities can benefit the few or the many, perpetuating existing systemic biases or overturning them, depending on whose voices are at the table and how different perspectives on these technologies are incorporated into their design and governance. An entrepreneur may regard AI as a mechanism for disrupting established industries or for maximizing profit. People from traditionally marginalized communities may see AI as a system for perpetuating harmful stereotypes or for exacerbating existing social and financial inequalities. Engineers and data scientists may regard AI as one of many types of software they are expected to develop as part of their day-to-day tasks, and government employees may regard AI as a mechanism for improving the efficiency of their workflows.

The Princeton Dialogues in AI and Work is a research agenda investigating what algorithmic and predictive data-driven tools mean to stakeholders across society. Building on prior work in the Dialogues in AI and Ethics case study series, the current phase of research focuses on how different communities will interact with, be represented by, and be implicated in algorithmic technologies in different ways. These studies will be undertaken through the lens of work, where “work” is broadly understood to encompass entrepreneurial organizations, gig labor, and the work of governance. Through this effort, we seek to understand the goals, incentives, and constraints of a broader ecosystem of stakeholders, and how these different stakeholder perspectives can be incorporated into novel strategies for designing, building, and governing algorithmic systems that can more equitably serve all.

Team

Our team is made up of researchers, data scientists, and students with expertise in sociology, computer science, political science, economics, and psychology. We take an interdisciplinary approach to answering questions about the intertwined relationships between AI and Work using a wide range of methodologies.

Elizabeth Anne Watkins (Lead)
Postdoctoral Research Associate

Orestis Papakyriakopoulos
Postdoctoral Research Associate

Amy Winecoff
Data Scientist

Klaudia Jaźwińska
Emerging Scholar

Christelle Tessono
Emerging Scholar

Open Call to Stakeholders, Researchers, and Students

If you are involved in, have interacted with, or have experience with any of the following areas of interest and are interested in getting involved on a volunteer basis, especially if you’re a member of the Princeton community, we would like to hear from you! Please fill out the online form, found here, delineating your interests and how you would like to get involved.

Ongoing Projects

AI & Early Stage Technology Companies

Research groups in both industry and academia have developed numerous guidelines and tools for the ethical use of AI; however, most of these tools were developed without input from technology workers who develop AI as a part of their job. As a result, ideas about ethical AI development have had little practical impact on the way engineers build, evaluate, and monitor AI-enabled technologies. For tools to be useful, they must fit within the existing processes of workers and function with the practical demands these workers face. Founders and early employees of AI-related startups face unique demands from external pressures to build and deploy products quickly and to scale their businesses rapidly. In our project on AI and early-stage ventures, we seek to understand how early-stage ventures think about AI in general and what pressures inform how they think about and use AI in their projects. How are AI tools conceived and configured by the work of entrepreneurship? In other words, how do institutional pressures in the early-venture space shape the data-driven solutions that organizations build for the marketplace?

If interested in participating, please contact Amy Winecoff or Elizabeth Watkins.

AI and Labor

Our goal for this segment of research is to bring a wide diversity of stakeholders around online labor platforms into dialogue: labor advocates and worker-protection organizations, technology and human-computer interaction (HCI) practitioners, the cities that support and host these exchanges, the public that relies on platform services, and policymakers concerned with this space. In a larger sense, this dialogue will consider how policy and design can be developed in tandem to complement each other’s strengths and power in the marketplace from a collective perspective. We are working with the Princeton HCI group, headed by Andrés Monroy-Hernández, and its associated students and postdocs to build bridges between policy and design and to explore how these societal tools can complement each other toward a future of more equitable, just, and fair labor platforms.

If interested in participating, please contact Elizabeth Watkins, Orestis Papakyriakopoulos, or Ashley Gorham.

Explainable AI in Public Agencies

A number of city agencies use tools that deploy complex data analysis to draw inferences or make predictions about the city using public data, inferences which are then used in agency decision-making, resulting in impacts on citizens and communities. These tools go by a variety of names, including algorithmic decision-support, and in some cases machine learning or artificial intelligence. There is little insight, however, into how such technologies are integrated into agency workflows, and how they result in either changes to, or support for, agency decision-making. There is also little insight into how city employees understand these systems, and whether, and how, they explain these systems, their resulting outputs, and their augmentation of agency decisions, in terms of how they impact citizens’ life chances and life outcomes. Using the lens of “explainable AI” (XAI), this study will use ethnographic methods of interviews and analysis of documentation to identify what city employees understand of the tools they’re using and how they integrate these tools into their decision-making. This research will take a sociotechnical approach, viewing algorithmic tools as embedded within webs of social and technical actors, to open the black boxes of both algorithmic systems and human decision-making. How do algorithmic systems augment the work of governance? How do these tools change agency decision-making, and how can we leverage descriptive research to inform best practices for how this work gets done?

If interested in participating, please contact Elizabeth Watkins or Meg Young.

Related Research From the Team on AI, AI Governance, and AI and Work

Cameron, Lindsey, Angele Christin, Michael Ann DeVito, Tiwanna Dillahunt, Madeleine Elish, Mary Gray, Noopur Raval, Rida Qadri, Melissa Valentine, and Elizabeth Anne Watkins. “This Seems to Work: Designing Technological Systems with The Algorithmic Imaginations of Those Who Labor.” CHI 2021. https://dl.acm.org/doi/abs/10.1145/3411763.3441331

Elish, Madeleine Clare and Elizabeth Anne Watkins. Repairing Innovation: A Study of Integrating AI in Clinical Care (New York: Data & Society Research Institute, 2020). https://datasociety.net/pubs/repairing-innovation.pdf

Engelmann, Severin, Jens Grossklags, and Orestis Papakyriakopoulos. “A Democracy Called Facebook? Participation as a Privacy Strategy on Social Media.” In Annual Privacy Forum, pp. 91-108. Springer, Cham, 2018. https://link.springer.com/chapter/10.1007%2F978-3-030-02547-2_6

Khaziev, R., P. Washabaugh, B. Casavant, Amy A. Winecoff, and M. Graham. Recommendation or Discrimination? Quantifying Distribution Parity in Information Retrieval Systems. arXiv draft. https://arxiv.org/abs/1909.06429

Lucherini, E., Sun, M., Amy A. Winecoff, & Narayanan, A. (2021). T-RECS: A Simulation Tool to Study the Societal Impact of Recommender Systems. arXiv preprint arXiv:2107.08959. https://arxiv.org/abs/2107.08959

Papakyriakopoulos, Orestis. “Political machines: a framework for studying politics in social machines.” AI & SOCIETY (2021): 1-18. https://link.springer.com/article/10.1007/s00146-021-01180-6

Papakyriakopoulos, Orestis, and Arwa Michelle Mboya. “Beyond Algorithmic Bias: A Socio-Computational Interrogation of the Google Search by Image Algorithm.” https://arxiv.org/pdf/2105.12856.pdf

Sherman, J., C. Shukla, S. Zhang, R. Textor, and Amy A. Winecoff. 2019. Assessing Fashion Recommendations: A Multifaceted Offline Evaluation Approach. In FashionX RecSys ‘19, September 20, 2019, Copenhagen, Denmark. https://arxiv.org/abs/1909.04496

Watkins, Elizabeth Anne. “Took a Pic and Got Declined, Vexed and Perplexed: Facial Recognition in Algorithmic Management.” CSCW 2020 Companion. https://doi.org/10.1145/3406865.3418383

Watkins, Elizabeth Anne. “The Tension Between Information Justice and Security: Perceptions of Facial Recognition Targeting.” Joint Proceedings of the ACM IUI 2021 Workshops. http://ceur-ws.org/Vol-2903/IUI21WS-TExSS-16.pdf

Winecoff, Amy A., F. Brasoveanu, B. Casavant, P. Washabaugh, and M. Graham. 2019. Users in the Loop: A Psychologically-Informed Approach to Similar Item Retrieval. In RecSys ‘19, September 16-20, 2019, Copenhagen, Denmark. https://dl.acm.org/doi/abs/10.1145/3298689.3347047

Winecoff, Amy A., Sun, M., Lucherini, E., & Narayanan, A. (2021). Simulation as Experiment: An Empirical Critique of Simulation Research on Recommender Systems. arXiv preprint arXiv:2107.14333. https://arxiv.org/pdf/2107.14333.pdf