Researchers at Princeton CITP interrogate digital technologies to understand how they impact, influence and interact with users and society, and to make those interactions better. Princeton CITP, together with experts at Stanford University and the Massachusetts Institute of Technology, unveiled a paper detailing the Foundation Model Transparency Index, or FMTI. The index is a tool for measuring just how forthcoming 10 of the world's top AI developers have been about their flagship foundation models, including OpenAI's GPT-4, Meta's Llama 2 and Amazon's Titan Text.

Sayash Kapoor, a computer science graduate student who coauthored the joint FMTI research paper, said the findings were disappointing. “While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies like social media,” he told PC Magazine. PC Magazine covered the launch of the index in “OpenAI, Meta, and Google Score Appallingly Low on Stanford’s New AI Transparency Test.”

To build the transparency score, the researchers assessed each company on a range of indicators, including whether it disclosed how much it pays the workers who help produce its models and what the environmental impacts of those models are. Coverage of the index appeared in multiple news outlets, including Axios, The New York Times and IEEE Spectrum.
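As a rough illustration of how such a score could be assembled, the sketch below treats each transparency question as a binary indicator (disclosed or not) and reports the share a developer satisfies. The indicator names and the aggregation rule are assumptions for illustration, not the index's actual methodology.

```python
# Illustrative sketch only: binary transparency indicators aggregated into a
# single score. Indicator names below are hypothetical examples.
from typing import Dict


def transparency_score(indicators: Dict[str, bool]) -> float:
    """Return the fraction of transparency indicators a developer satisfies."""
    if not indicators:
        return 0.0
    return sum(indicators.values()) / len(indicators)


example_developer = {
    "discloses_data_worker_wages": False,      # labor transparency (hypothetical)
    "discloses_environmental_impact": False,   # compute/energy disclosure (hypothetical)
    "documents_training_data_sources": True,
    "publishes_model_evaluation_results": True,
}

print(f"Transparency score: {transparency_score(example_developer):.0%}")
```

Under this simple counting scheme, a developer that answers half of the questions publicly would score 50%; the point is only to show how many small disclosure decisions roll up into one comparable number.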