AI Snake Oil

EXCERPT: Princeton SPIA AI Experts Separate Hype from Substance in New Book

Sep. 24, 2024
By Tom Durso
Source: Princeton School of Public and International Affairs

Artificial intelligence – the ubiquitous AI that seems to be everywhere, all the time – has both captivated and frightened the world. Making sense of one of the most hyped phenomena in recent memory is no small task. Who better to help than two members of Time magazine’s list of the 100 most influential people in AI?

In their new book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (Princeton University Press), Arvind Narayanan and Sayash Kapoor offer commonsense advice on how to tell good AI from junk AI.

“We are surrounded by AI hype. But while the AI industry has many critics, they often paint with too broad a brush. We wrote this book because we felt there is room for more technically informed commentary. And as AI researchers who try to bring accountability to the industry, we felt we were in a good position to provide it.”
Arvind Narayanan, director of Princeton SPIA’s Center for Information Technology Policy and professor of computer science

Narayanan and CITP graduate student Kapoor offer an essential understanding of how AI works and why it often doesn’t, where it might be beneficial and where it might be damaging, and when one should suspect that companies are using AI hype to sell AI snake oil – products that don’t work and likely never will.

“AI Snake Oil” is no anti-technology screed. The authors acknowledge the potential of some types of artificial intelligence, explain why AI isn’t an existential risk, and point out that the danger doesn’t come from the technology, but from those who use it. At the same time, they are frank about the harms AI is already causing and the disproportionate control of AI by mostly unaccountable big tech firms.

“By equipping readers with a basic understanding of the different flavors of AI,” Narayanan says, “we hope to make it easier to navigate the claims about AI developments that we encounter every day.”

• • •

Artificial intelligence, AI for short, is an umbrella term for a set of loosely related technologies. ChatGPT has little in common with, say, software that banks use to evaluate loan applicants. Both are referred to as AI, but in all the ways that matter—how they work, what they’re used for and by whom, and how they fail—they couldn’t be more different.

Chatbots, as well as image generators like DALL-E, Stable Diffusion, and Midjourney, fall under the banner of what’s called generative AI. Generative AI can generate many types of content in seconds: chatbots generate often-realistic answers to human prompts, and image generators produce photorealistic images matching almost any description, say “a cow in a kitchen wearing a pink sweater.” Other apps can generate speech or even music.
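In code, generating text from a prompt can be sketched in a few lines. The example below uses the open-source Hugging Face transformers library; the small gpt2 model is a stand-in chosen purely for illustration, far removed from the much larger models behind commercial chatbots.

    # A minimal text-generation sketch using the Hugging Face
    # transformers library. The model choice (gpt2) is illustrative
    # only; commercial chatbots run vastly larger models behind an API.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "A cow in a kitchen wearing a pink sweater is"
    result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(result[0]["generated_text"])

Image generators work analogously: a text description goes in, and a model trained on image-caption pairs produces a matching picture.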

Generative AI technology has been rapidly advancing, its progress genuine and remarkable. But as a product, it is still immature, unreliable, and prone to misuse. At the same time, its popularization has been accompanied by hype, fear, and misinformation.

In contrast to generative AI is predictive AI, which makes predictions about the future in order to guide decision-making in the present. In policing, AI might predict “How many crimes will occur tomorrow in this area?” In inventory management, “How likely is this piece of machinery to fail in the next month?” In hiring, “How well will this candidate perform if hired for this job?”

Predictive AI is currently used by both companies and governments, but that doesn’t mean it works. It’s hard to predict the future, and AI doesn’t change this fact. Sure, AI can be used to pore over data to identify broad statistical patterns—for instance, people who have jobs are more likely to pay back loans—and that can be useful. The problem is that predictive AI is often sold as far more than that, and it is used to make decisions about people’s lives and careers. It is in this arena that most AI snake oil is concentrated.
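In code, that kind of pattern-finding amounts to fitting a statistical model to past outcomes and scoring new cases against it. The toy sketch below uses the scikit-learn library; the loan data and features are invented for illustration, and a real system would differ in scale but not in kind.

    # A toy predictive-AI sketch with scikit-learn: fit a model to
    # historical loan outcomes, then score a new applicant. The data
    # and features are invented for illustration.
    from sklearn.linear_model import LogisticRegression

    # Each row: [has_job, annual_income_in_thousands]
    X = [[1, 60], [1, 45], [0, 20], [1, 80],
         [0, 15], [0, 30], [1, 50], [0, 25]]
    # Labels: 1 = repaid the loan, 0 = defaulted
    y = [1, 1, 0, 1, 0, 0, 1, 0]

    model = LogisticRegression().fit(X, y)

    applicant = [[1, 40]]  # employed, $40k income
    prob = model.predict_proba(applicant)[0][1]
    print("Estimated repayment probability:", round(prob, 2))

The model will indeed learn that employment correlates with repayment, but nothing in it can anticipate a layoff, an illness, or any of the particulars that determine whether this applicant actually repays. The output is a statistical tendency, not a reliable forecast about an individual.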

AI snake oil is AI that does not and cannot work as advertised. Since AI refers to a vast array of technologies and applications, most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil. This is a major societal problem: we need to be able to separate the wheat from the chaff if we are to make full use of what AI has to offer while protecting ourselves from its possible harms, many of which are already occurring.

Excerpted from AI SNAKE OIL: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Copyright © 2024 by Arvind Narayanan and Sayash Kapoor. Reprinted by permission of Princeton University Press.