By Karen Rouse
A video of Kenia Hale’s presentation is available HERE.
Kenia Hale, an Emerging Scholar at Princeton’s Center for Information Technology Policy, traveled to Doha, Qatar, last month at the invitation of the United Nations’ Accelerator Labs program to present her work advocating for digital technologies that are just and equitable.
“I was invited as a subject matter expert on discriminatory and harmful AI practices, and to advise researchers on how to achieve equitable and just AI futures,” said Hale, whose own research centers on the environmental implications of artificial intelligence, or AI. “Given that many countries are trying to utilize more AI technologies, they wanted me to advise them about potential issues, in an attempt to avoid them.”
During the four-day trip, Hale gave a talk titled “Towards an Equitable and Just AI Future” and mentored delegates on how AI technology could support UN goals, like protecting wildlife and oceans and reducing discrimination. It was all part of the Artificial Intelligence for Collective Intelligence conference, which included representatives from more than 91 countries.
Organizers reached out to Hale, who represented both Princeton CITP and the Princeton Ida B. Wells Just Data Lab, after hearing about a special CITP event she organized and moderated in April, Tech in Conversation: Imagining Radical Tech Futures, which focused on envisioning technologies that empower Black and brown communities often excluded from tech spaces.
At the conference in Doha, Hale’s talk touched on ways in which AI is effective, and where it has been problematic, such as in privacy violations and over-policing. Her presentation also drew from the research of Princeton experts Arvind Narayanan, a professor of computer science and CITP’s incoming director; Professor of African American Studies Ruha Benjamin, who is a member of CITP’s associated faculty and the founding director of the Ida B. Wells Just Data Lab; and Mihir Kshirsagar, who leads the CITP Tech Policy Clinic.
“I had delegates come up to me and say, ‘This is the kind of research we need to be focusing on,’” Hale recalled.
Hale also discussed the ways in which artificial intelligence could be used to address global hardships — like preventing flooding in Somalia, tracking online violence against women in Uruguay, designing solar panels in Lebanon, or monitoring air pollution in Kazakhstan.
Hale called the trip eye-opening.
“It mirrored the ways in which, for many … in the United States, the words ‘Artificial Intelligence’ elicit images of progress and the future,” Hale said. However, she added, “both in the U.S. and internationally, there aren’t enough conversations about the potential harms and biases encoded into these technologies.”