Video available here.
Powerful new technologies like OpenAI’s “ChatGPT” or Google’s “Bard” have sparked excitement over their potential to transform how we work, learn, and communicate for the better. But their potential harms also trigger fears and unease. As a result, the public discourse around such large language models (LLMs) can be noisy and chaotic.
CITP has convened a panel of experts from the journalism, tech research, and public policy sectors to discuss their experiences with – and approaches to – engaging with these emerging technologies in their respective professions. The panel will also discuss the responsibilities journalists and academics may have in shaping the public conversation around digital technologies, and how they can support each other’s work for the benefit of the public.
Julia Angwin is an award-winning investigative journalist and contributing opinion writer at The New York Times. She founded The Markup, a nonprofit newsroom that investigates the impacts of technology on society, and is an Entrepreneur in Residence at Columbia Journalism School’s Brown Institute. Angwin was previously a senior reporter at the independent news organization ProPublica, where she led an investigative team that was a finalist for a Pulitzer Prize in Explanatory Reporting in 2017 and won a Gerald Loeb Award in 2018. From 2000 to 2013, she was a reporter at The Wall Street Journal, where she led a privacy investigative team that was a finalist for a Pulitzer Prize in Explanatory Reporting in 2011 and won a Gerald Loeb Award in 2010. In 2003, she was on a team of reporters at The Wall Street Journal that was awarded the Pulitzer Prize in Explanatory Reporting for coverage of corporate corruption. She is also the author of “Dragnet Nation: A Quest for Privacy, Security and Freedom in a World of Relentless Surveillance” (Times Books, 2014) and “Stealing MySpace: The Battle to Control the Most Popular Website in America” (Random House, March 2009). She earned a B.A. in mathematics from the University of Chicago and an M.B.A. from the Graduate School of Business at Columbia University.
Sorelle Friedler is the Shibulal Family Associate Professor of Computer Science at Haverford College. She served as the assistant director for Data and Democracy in the White House Office of Science and Technology Policy under the Biden-Harris Administration, where her work included the Blueprint for an AI Bill of Rights. Her research focuses on the fairness and interpretability of machine learning algorithms, with applications ranging from criminal justice to materials discovery. She holds a Ph.D. in computer science from the University of Maryland, College Park.
Arvind Narayanan is a professor of computer science at Princeton University. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), twice a recipient of the Privacy Enhancing Technologies Award, and thrice a recipient of the Privacy Papers for Policy Makers Award.
Cosponsors: The Pulitzer Center, CITP’s Digital Witness Lab, Brown Institute for Media Innovation and the Program in Journalism at Princeton University.