
CITP’s Hilke Schellmann Studies AI’s Impact on Facts, Society

Jan. 20, 2026
By Ambreen Ali
Source: Princeton School of Public and International Affairs

Journalists are tasked with reporting facts about the world, a job that has become much more complicated with the increasing use of artificial intelligence, which has made it easier to manipulate content and disseminate false information. Investigative reporter Hilke Schellmann says that means journalists like her must embrace a more expansive role in helping society discern fiction from reality.

Schellmann is a visiting professional at Princeton University’s Center for Information Technology Policy (CITP) and an associate professor of journalism at New York University. While at CITP, a joint initiative of the Princeton School of Public and International Affairs and the School of Engineering and Applied Science, she is working on several projects that explore how AI tools are affecting aspects of society such as hiring practices and healthcare.

In a recent interview, Schellmann discussed that work and her takeaways on how AI is changing the ways we work and live.

What are you focusing on during this academic year at Princeton?

Hilke Schellmann: I am at Princeton during my sabbatical, and this is my time to think about new, strategic, long-term projects that I want to pursue. One of the things that came out of my AI and hiring book (“The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now”) is the realization that a lot of people wonder if there might be ageism in online hiring platforms, but there is very little scholarship on the topic. I'm working with [Princeton SPIA faculty member] Aleksandra Korolova on a project examining whether ageism is a factor in AI hiring tools, especially in resume parsers, which are used in the first stage of hiring when you submit your resume.

I like the intellectual, interdisciplinary community at CITP, and that has been inspiring to me. A researcher gave a talk on companion chatbots and how anthropomorphizing them can affect people's social lives. I also saw a demonstration by a pedagogy professor at NYU of how she teaches kids about LLMs (large language models), and it dawned on me that we need a companion chatbot literacy program for kids in schools, because we see such high rates of teenagers and young kids using these tools.

My hope is that we find ways to teach kids how these tools work under the hood, so that knowing this takes away the “magical quality” of some of these interactions and replaces it with a more skeptical, critical understanding of them.

What are you working on related to journalism and AI?

HS: I'm working on different book projects that are all related to AI accountability. One is on how journalism is changing due to AI. The work I did on benchmarking AI tools is morphing into a larger book. I was surprised by how much feedback I got on that article in the Columbia Journalism Review, especially from editors who feel that their news organizations want them to use AI tools but who don't really have a good grasp of which tool to use. There is a need to benchmark AI tools for reporters and to build a toolkit showing them how to use AI safely for investigative reporting.

I'm really concerned about information integrity. Anyone who works with facts should be concerned that we live in a time when many people in this country don't share the same factual reality. A lot of people are questioning science and journalism and verification and facts. I'm thinking about whether there is a way to build AI tools to automatically check facts against a whitelist, for example, or use AI agents to help fact checkers and journalists do their work.

How do you think about your role at CITP as a journalist?

HS: We as scientists and academics need to be much more in the public eye to talk about how we can build more trust in facts and have evidence-based practices become part of the mainstream. As investigative journalists, we want to hold powerful people to account. I actually think we need to expand that definition to include AI, because AI obviously makes powerful decisions as well. For me, that means I am broadening the scope of my work.

You’re also collaborating with the journalism program on campus. Tell us about that.

HS: We are planning a panel and a workshop at the end of March that will look at how AI is changing journalism and teach students how they can better use AI for journalism projects, as well as when not to use it.

Within CITP, we are also planning an information integrity workshop in late spring, bringing together technologists and journalists. We live in a time when almost anything can be generated, so how can we make sure that we keep the integrity of facts? For example, how can I make sure that a person I am speaking with as a journalist is a real person on Zoom and not an avatar?

As a journalist, what motivated you to move in this direction?

HS: I think a lot of investigative journalists are driven by this endless curiosity: How is our world already changing? How is it going to change? And how can we influence this change in the public interest? I don't think technology companies are going to solve all of these problems for us, so we need to figure out how we can use this technology in the public interest.

What are some of your takeaways from your research on AI?

HS: What's interesting to me is how AI is changing our daily lives, especially in high-stakes decisions. Take hiring: It sort of flew under the radar in the AI accountability discourse because, traditionally, there are a lot of candidates and a lot of people get rejected. There wasn't much awareness of how deeply this field is changing because of AI. It’s pretty clear that, right now, most large companies use AI to reject people in the early stages of hiring. We see this in resume screening and one-way video interviews. Now we also see avatars conducting interviews with applicants.

I do think these are high-stakes decision-making tools, and there's so much hype in this space. I think the hype around artificial general intelligence is a distraction when we really need to look at how our lives and our work are already changing.

What I found in HR is that, unfortunately, a lot of the first generation of AI tools automate already flawed processes and bring in historical bias or new bias. These tools can be harmful, and we know very little about them because these tools are often black boxes.

The other thing I found is that a lot of the algorithms being used are pretty basic, but that doesn’t make them less problematic. Many HR departments want or need to save money, so they buy these tools from vendors, and then there isn't a whole lot of oversight.

Since the book came out, I have talked to a lot of HR leaders who have shared with me that their companies quietly stopped using an AI tool when they found problems similar to the ones I described. Maybe they were told that a tool revolutionizes hiring and doesn't create any bias, but then they saw that it does. We don't see a lot of people talking about this publicly, because of fear of litigation, so problematic tools are not eradicated as soon as possible and often live on because other companies believe the vendors’ promises.