
Professor Arvind Narayanan speaks with Quanta Magazine

The Researcher Who Would Teach Machines to Be Fair, a story in Quanta Magazine, highlights the work of Professor Arvind Narayanan in exposing privacy violations by tech companies. It includes a Q&A, video and photos of Narayanan at work at CITP and on the Princeton campus. Narayanan also discusses faulty artificial intelligence technologies that have been used to make claims about people's behavior, ultimately harming some of society's most vulnerable.

“These kinds of algorithms are used by banks for lending. They’re used by insurance companies. This is happening in hiring when people apply for jobs,” Narayanan, a professor of computer science, told Quanta, a math and science publication. “This is happening even in the criminal justice system when it comes to decisions about bail and parole. When you look at the accuracy of these models, it’s barely better than random. So, is it ethical to make these consequential decisions about people when they’re only slightly more accurate than the flip of a coin?”

Narayanan’s comments are grounded in CITP research. Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy, a recently released paper co-authored by CITP-affiliated researchers Angelina Wang, Sayash Kapoor, Solon Barocas and Narayanan, outlines multiple flaws in the machine learning systems that nonprofit, medical, legal and other agencies rely on to make judgments about clients.

Narayanan also said in the Quanta interview that as a child who was good at math and solving puzzles, he thought technology was a force for good. Later, he said, he learned “that what tech does is it amplifies the best and the worst of society.”

Read Narayanan’s full interview with Quanta and watch the video on YouTube.

—CITP Communications Manager, Karen Rouse