By Karen Rouse


Peter Henderson

Peter Henderson, a Stanford University-based researcher whose combined expertise in law and machine learning is helping federal agencies deploy responsible, equitable and efficient artificial intelligence algorithms to best serve their constituents, will join the Princeton Center for Information Technology Policy in January as an assistant professor.

Henderson will hold joint appointments with Princeton’s School of Public and International Affairs (SPIA) and the Department of Computer Science and will be based at Sherrerd Hall, alongside CITP core faculty members Aleksandra Korolova, Jonathan Mayer, Prateek Mittal and CITP director Arvind Narayanan.

Henderson — who earned his J.D. from Stanford Law School and is completing his Ph.D. in computer science at Stanford University — said he is looking forward to joining CITP, an international hub where faculty, visiting fellows, professionals and graduate students collaborate on research that improves interactions between digital technologies and society.

“I think CITP is pretty special in the fact that it is sitting near the computer science department and bringing together folks from SPIA and other departments,” Henderson said. “It creates this really interesting interdisciplinary emphasis.”

Bolstering CITP’s Legal Heft

With his background in computer science and law, Henderson’s work not only aligns with CITP’s mission to research digital technologies for the good of society, it also bolsters CITP’s legal heft. CITP Tech Policy Clinic Lead Mihir Kshirsagar is a former attorney with the New York Attorney General’s Bureau of Internet & Technology, and Mayer is a Stanford Law School graduate and a former chief technologist of the Federal Communications Commission Enforcement Bureau.

CITP Director Narayanan welcomes his expertise: “Peter Henderson is an expert on copyright questions raised by generative AI, which are some of the most urgent, profound, and impactful questions being debated in tech policy today.”

Just last month, the New York Times opinion piece “A Creator (Me) Made a Masterpiece With A.I.” cited “Foundation Models and Fair Use,” a paper on which Henderson was the lead author. He and his coauthors examined the potential legal and ethical risks tied to the growing use of foundation models trained on copyrighted material, particularly when proper attribution is not given. Their research included a review of U.S. case law.

Machine Learning for the Real World

Henderson’s work is notable for its hands-on approach to developing algorithms for use in real-world scenarios. As part of his research into how federal agencies use artificial intelligence, Henderson and his team embedded with Internal Revenue Service workers to observe how the agency uses machine learning in its auditing of American taxpayers.

They identified ways in which the agency could improve its use of machine learning — to serve constituents more equitably and efficiently — which the IRS is now incorporating into its operations. Henderson discussed this work in April when he gave a talk at CITP, “Aligning Machine Learning, Law, and Policy for Responsible Real-World Deployments” (watch below).

“To my mind, it’s really important to partner with the organizations to the extent possible,” Henderson said in a recent interview. “I think within the organizations, people at agencies, often the civil servants, really want to improve the agency and make what they’re doing better.”

Henderson said such research is needed to improve how government agencies use artificial intelligence because, as the use of machine learning has skyrocketed, so have instances of misuse or error that can hurt the public. He noted last year’s childcare benefits crisis in the Netherlands, in which the Dutch tax authority, relying on AI, wrongly identified thousands of innocent citizens, many of them immigrants, as fraudsters.

“There was supposed to be a human in that loop who was checking the decision of the algorithm,” Henderson said. “Instead, they were rubber-stamping it, and that led to real people losing important benefits.” The harm is compounded when there is a subsequent domino effect, he added. “You’re flagged for fraud in this benefits claim context, which means that other parts of the government will see that you’ve been flagged.” That error can have repercussions for the citizen, including the loss of other public services they are legitimately due.

Henderson’s aim is to avoid such fiascos by understanding the regulatory and legal frameworks within which federal agencies and workers serve the public, and by using that understanding to design machine learning technology that is not only technically accurate but also aligned with the agency’s regulatory and policy framework.

“We want to make sure that these models are well-evaluated so that we’re giving a realistic picture of their performance to stakeholders like policymakers, agency representatives and the broader public,” he said.

His goal is to continue working with external partners like federal agencies, local agencies or nonprofit organizations.

“That’s a big chunk of my core research,” Henderson said. “By working in these settings, it can help us understand gaps in fundamental machine learning research, advancing the state of the field, while having real-world positive impact.”

The Intersection of AI, Law and Civil Rights

Henderson’s roots are in computer science. Before earning his master’s degree, he was a software engineer working on Alexa and Amazon Web Services (AWS). Henderson has also had a longtime interest in civil rights and human rights, which prompted him to seek a law degree.

“When I started my Ph.D., I was really focused on core machine learning, reinforcement learning, and natural language processing,” he said. “I also have an interest in fundamental law and policy work around civil rights and human rights, so my research often extends there as well.”

While at Stanford, he co-led the Domestic Violence Pro Bono Project, which provides legal services to the victims of domestic violence, and he represented clients who were given lengthy prison sentences for minor crimes as part of the Three Strikes Project.

Henderson, who earned his master of science degree at McGill University and the Montréal Institute for Learning Algorithms, also contributed to the Stanford Native Law Pro Bono Project to help improve Native Americans’ experiences with the law and policing.

He said he hopes to continue his civil rights and community work when he joins Princeton in 2024.

Henderson gave the following talk at CITP in April 2023: