CITP Launches the Digital Witness Lab
Research Hub to Help Journalists Track Bad Actors on Platforms
Princeton University’s Center for Information Technology Policy (CITP) is excited to announce the launch of the Digital Witness Lab — an innovative research laboratory where engineers will design software and hardware tools to track the inner workings of social media platforms, and help journalists expose how those platforms exploit users’ privacy and aid the spread of misinformation and injustice globally.
Based at CITP’s Sherrerd Hall office, the Lab is led by Surya Mattu, an award-winning data engineer and journalist whose most recent project with The Markup resulted in “Facebook Is Receiving Sensitive Medical Information from Hospital Websites,” an investigative news story that revealed 33 hospital websites and seven health system patient portals were collecting sensitive patient data through Facebook’s Meta Pixel code.
Mattu’s work at the Lab is twofold. First, he will build relationships with journalists who investigate harms in digital technology to collaborate on stories — focusing initially on India and Brazil, where misinformation campaigns have heavily influenced elections. Second, he will work with those journalists to build tools that help them capture data and document bad actors on platforms that manipulate users’ personal information.
Most journalists don’t know how algorithms make decisions because that information is hidden in proprietary software and apps, and companies have no obligation to share that data, said Mattu, who was most recently a University of Michigan Knight-Wallace Fellow. “Injustice often lurks in the shadows of digital platforms.”
The Digital Witness Lab bypasses such obstacles by building custom software and hardware to capture data from these platforms, said Mattu, who earned a Master of Electronics and Communications Engineering from the University of Nottingham, and a Master of Professional Studies from New York University’s Tisch School of the Arts. Journalists can then work with CITP researchers to report out the ways in which sites perpetuate biases and inequalities in society.
Mattu’s first undertaking at CITP is WhatsApp Watch — a research project in which Mattu will monitor public WhatsApp groups to document the spread of misinformation.
The work of the Digital Witness Lab aligns closely with CITP’s mission: its academics and researchers study how digital technology impacts society — including its harms.
“We are excited to welcome Surya into the Princeton CITP community,” said Tithi Chattopadhyay, the Center’s executive director. “We look forward to building relationships with journalists and newsrooms that don’t have access to the types of digital tools Surya has a record of developing to support the critical work of investigative reporters. We are excited about the real world impact his work will have.”
###
Learn more about the Digital Witness Lab in CITP’s Q & A with Journalist and Engineer Surya Mattu
You have said that the Digital Witness Lab collects data “in the public interest.” Can you talk more specifically about how this data collection work benefits the public?
Mattu: Our data collection exposes harms that would otherwise remain hidden. This could be a Facebook post for housing that excludes people based on their demographic group. It could be an algorithm used to sort employment resumes where only one type of person passes the screening. In criminal justice, it could be a risk assessment algorithm that penalizes Black defendants more than white defendants in sentencing.
Algorithmic decisions in systems like these take place in proprietary software and apps. Companies have no regulation that requires them to reveal how their systems work or the results they produce at scale.
Can you give us an example of how you’ve done this in the past?
Mattu: At ProPublica, I built a browser extension to collect data on how advertisers use Facebook’s behavioral ad targeting. The data we gathered revealed Facebook’s “ethnic affinities” categories, which the company used as a proxy for race. Through our crowdsourcing campaign, we were able to show how Facebook allowed digital redlining on its platform by making it possible for advertisers to exclude African-American users from receiving housing advertisements. In 2022, Facebook finally agreed to stop discriminatory advertising on the platform.
At The Markup, I built “Blacklight,” a real-time website privacy inspector. Blacklight allows anyone to enter a website’s address and receive a live scan that reveals all of the user-tracking technologies used on the site — and who’s receiving that data. The tool provided evidence for multiple investigations, including a look into how anti-abortion clinics collect highly sensitive information from would-be parents and how state-run vaccination websites tracked people as they booked vaccine appointments. Human Rights Watch also used Blacklight to measure surveillance in children’s online learning tools in 48 countries.
Here at Princeton’s Center for Information Technology Policy, you’re developing tools for your latest project, WhatsApp Watch, which focuses on people who spread misinformation.
Mattu: Correct. I am leading the development of tools that persistently monitor and evaluate content shared on public groups for signals and patterns that reveal the spread of misinformation. Researchers and journalists have already documented a variety of strategies that are used to spread misinformation, including sharing inauthentic content and hashtag campaigns that manipulate what is trending on social media platforms such as Twitter.
Our goal is to conduct studies in India and Brazil that monitor groups over time to detect any emerging strategies used to manipulate messaging around election campaigns, traditional media narratives and other parts of our information ecosystem.
Where do you get the information to report out the story?
Mattu: Data in our research can come from crowdsourced data donations, programmatic data collection or public records requests. Usually, it’s some combination of these methods. The WhatsApp project relies on data donations and programmatic collection.
What harms are you ultimately exposing by building tools?
Mattu: The harm can take many forms, including: surveillance of children through online learning tools; tracking of users online for the purposes of targeted advertising without their consent; algorithmic recommendation of content that is deceptive and designed to manipulate and polarize people; and perpetuation of discriminatory practices through predictive models.
Depending on the investigation, the harm could be a platform profiting off ads for vaccine misinformation, or political groups being promoted to Facebook users after the company pledged to stop.
Who is part of your team at the Digital Witness Lab?
Mattu: The lab builds on work I have been doing at ProPublica, Gizmodo’s Special Project Desk, and The Markup over the past decade. It was started by me and Micha Gorelick, a public interest technologist and data journalist. We worked together at The Markup on the Citizen Browser project. She has also worked at the Organized Crime and Corruption Reporting Project and co-founded Fast Forward Labs, an applied machine learning lab in New York. We will also initially work with freelance journalists and editors to start producing stories.
How do journalists get on board?
Mattu: We are reaching out to some journalists directly, but journalists can certainly reach me directly to learn more about WhatsApp Watch.
The Center for Information Technology Policy is a nonprofit, nonpartisan, interdisciplinary hub where researchers study the impact of digital technologies on society with the mission of informing policymakers, journalists, researchers, and the public for the good of society. CITP’s research priorities are Platforms & Digital Infrastructure, Privacy & Security, and Data Science, AI & Society.
-Karen Rouse, CITP Communications Manager
November 7, 2022