Princeton SPIA Experts Contribute to Global Report on AI Safety

Feb. 26, 2025
By Ambreen Ali

A recently released U.K. government report on artificial intelligence (AI) safety draws on contributions from 100 AI experts across 33 countries and intergovernmental organizations—including members of the Center for Information Technology Policy (CITP) at the Princeton School of Public and International Affairs.

Billed as “the world’s first comprehensive synthesis of current literature of the risks and capabilities of advanced AI systems,” the report outlines how AI could exacerbate risks such as the manipulation of public opinion, biological and chemical attacks, environmental impact, copyright infringement, and loss of privacy.

While it stops short of making policy recommendations, the report does address how to mitigate these risks and offers a shared international understanding of them. Contributors from Princeton SPIA include Arvind Narayanan, professor of computer science and CITP director; Edward Felten, Robert E. Kahn Professor of Computer Science and Public Affairs, emeritus; and Sayash Kapoor, a computer science Ph.D. candidate at CITP. Jonathan Barry, MPA ’25, was a project manager for the report.

A few weeks after the report’s release, U.S. Vice President J.D. Vance spoke at the AI Action Summit in Paris, signaling a shift in U.S. policy from focusing on AI safety to seizing AI opportunity.

“The Trump administration believes that AI will have countless, revolutionary applications in economic innovation, job creation, national security, health care, free expression, and beyond. And to restrict its development now would not only unfairly benefit incumbents in the space, it would mean paralyzing one of the most promising technologies we have seen in generations,” Vance said.

Kapoor said the report serves as an important evidence-based overview of AI risks that should be considered as countries embrace the AI opportunity.

“Over the last couple of years, there was a drastic shift from focusing on evidence-based safety to arguments based on speculation and existential risks of AI. I think part of what the vice president was pushing back against was this narrative that, in order to make AI safer, we need to shut it down,” Kapoor said. “But the risk in explicitly disavowing safety is that companies will take ever more risky approaches to productionizing the technology.”

Kapoor led a section of the report on open AI foundation models, based on a CITP workshop in September 2023 that focused on the risks and opportunities of such models. He also co-authored an article in Science on this topic, as well as a book with Narayanan, “AI Snake Oil,” which explores the risks and hype surrounding AI.

In his contribution to the U.K. report, Kapoor makes the case that, when assessing how safe these models are, one should weigh the marginal risk they pose compared with what’s already achievable through existing technology.

For example, in 2023, a class at the Massachusetts Institute of Technology showed that malicious actors could use a large language model to obtain information for creating a bioweapon. But it turned out that all the pieces of information involved in that exercise were also available on Wikipedia, Kapoor said.

“If we are trying to analyze the real risks of openness, we should focus on the marginal risk,” Kapoor said. “AI is like many other technologies we use. It has both positive capabilities and malicious uses.”

Overall, Kapoor said the value of the report is to show a global consensus on AI safety risks across a wide set of experts with “wildly different opinions.”

“Even laying out where experts disagree is quite helpful, because those often tend to be the areas where we need to do more work to deepen our understanding,” he said.