CITP leaders appeared on Capitol Hill, urging members of Congress to prioritize transparency as they develop national policies around artificial intelligence. Princeton experts want to make sure citizens know which developers are behind the A.I. programs they use, and to give researchers and journalists the authority to probe the inner workings of A.I. systems.

Surya Mattu, a data engineer and journalist who leads the Digital Witness Lab at CITP, and CITP Director Arvind Narayanan, a Princeton professor of computer science, both testified on November 1 before the U.S. Senate’s A.I. Insight Forum on High Impact A.I., co-chaired by New York Senator Chuck Schumer.


CITP Director Arvind Narayanan and Digital Witness Lab Lead Surya Mattu prepare to give testimony at a forum on High Impact A.I. hosted by Sen. Chuck Schumer (D-New York).

“Artificial intelligence is the next frontier in algorithmic injustice,” Mattu told the bipartisan panel. “I urge you to consider how independent scrutiny by journalists and researchers has played a crucial role in checking the power of technology.”

Narayanan, meanwhile, called on the lawmakers to give the public the right to know which A.I. claims made by developers are proven and which are false. “Transparency around efficacy (or lack thereof) may help weed out snake oil products from the market,” said Narayanan, co-author of the A.I. Snake Oil blog and book by the same name.

The November 1 forum was just one in a series that Schumer hosted with Senators Mike Rounds, Martin Heinrich and Todd Young. It specifically focused on areas in which flawed or harmful artificial intelligence systems have had the most significant impacts on Americans’ lives, such as finance, health care and the justice system, areas Narayanan has researched widely.

The forums also come at a time of strong national momentum around A.I., spurred in large part by President Biden’s October 30 executive order to mitigate the risks of artificial intelligence, including creating standards for A.I. safety, security, privacy and equity.

CITP researchers have been at the forefront of research exposing the shortcomings of A.I. systems and knocking down unfounded claims. Just last month, CITP graduate student Sayash Kapoor was among a team of researchers from Stanford University and the Massachusetts Institute of Technology that developed a new transparency index for generative A.I. models.

Mattu told the panel that government regulation alone is not sufficient to protect Americans from A.I. harms; independent scrutiny and persistent monitoring by researchers and journalists are also necessary, he said.

Mattu shared that tools he developed in his own work have exposed deceptive practices by some companies.

“I was able to show how a marketing company logged users’ personal data from the sites of clinical health trial providers and mortgage companies,” he said, adding that he has also “documented how the Meta Pixel tool (formerly the Facebook Pixel tool) quietly scoops up people’s data as they use hospital websites, fill out federal student loan applications, and file their taxes online.”

Among other recommendations, Narayanan suggested in his testimony that the lawmakers build transparency into the government’s procurement practices, saying “it will have a ripple effect that will raise standards across the A.I. industry.”

Other written and verbal testimony came from representatives of the Service Employees International Union, who are concerned with the impact of artificial intelligence on workers; the Fair Housing Alliance; and the Brookings Institution.

See Surya Mattu’s full comments here.

See Arvind Narayanan’s full comments here.