The emerging field of artificial intelligence (AI) safety has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced AI while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. We contend that overlapping communities interested in these ideas have merged into what we refer to as the broader “AI safety epistemic community,” which is sustained through its mutually reinforcing community-building and knowledge production practices.
We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through career advising and web forums; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them.
In this talk, Ahmed will present in-progress research from two collaborations with her CITP colleagues Klaudia Jaźwińska, Amy Winecoff, Archana Ahlawat, and Mona Wang, investigating the epistemic culture of AI safety and the emergent work practices of people focused on the sub-field of AI alignment.
Bio:
Shazeda Ahmed holds a Ph.D. from the University of California, Berkeley School of Information.
She is currently a fellow in the Transatlantic Digital Debates at the Global Public Policy Institute. She was a pre-doctoral fellow at two Stanford University research centers, the Institute for Human-Centered Artificial Intelligence (HAI) and the Center for International Security and Cooperation (CISAC), and has previously worked as a researcher for Upturn, the Mercator Institute for China Studies, Ranking Digital Rights, the Citizen Lab, and the AI Now Institute.
Ahmed was a Fulbright fellow at Peking University’s Law School in Beijing, where she conducted field research on how tech firms and the Chinese government are collaborating on the country’s social credit system. Her additional work focuses on perceptions of algorithmic discrimination and emotion recognition technologies in China, as well as applications of artificial intelligence in Chinese courtrooms.
Her work on the social inequalities that arise from state-firm tech partnerships in China has been featured in outlets including the Financial Times, WIRED, the South China Morning Post, Logic magazine, TechNode, The Verge, CNBC, and Tech in Asia.
Attendance at CITP Seminars is restricted to Princeton University faculty, staff, and students.
If you need an accommodation for a disability, please contact Jean Butcher at butcher@princeton.edu at least one week before the event.
This talk will not be recorded.