
It’s a truism that AI is the worst it will ever be. But while model capabilities do tend to improve over time, users’ experience of AI as a consumer product is shaped by many other factors. In a draft paper, Akash Kapur and Arvind Narayanan adapt Cory Doctorow’s enshittification thesis to AI and argue that a degradation of quality is possible, even likely. Some of the reasons AI products might get worse are familiar, such as market power arising from user lock-in; others are new, stemming from geopolitical pressures on AI companies. The talk will discuss the symptoms of enshittification and potential ways to avert this future.
Bio:
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a newsletter of the same name, which is read by 60,000 researchers, policy makers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: Bitcoin and Cryptocurrency Technologies and Fairness in Machine Learning. Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes. Narayanan was named to TIME’s inaugural list of the 100 most influential people in AI. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).
In-person attendance is open to Princeton University faculty, staff, and students.
This talk will be livestreamed and recorded. The recording will be posted to the CITP website, the Princeton University Media Central channel, and the CITP YouTube channel.
If you need an accommodation for a disability, please contact Jean Butcher at butcher@princeton.edu.
Sponsorship of an event does not constitute institutional endorsement of external speakers or views presented.