

AI DebunkBot Effectively Dissuades People from Believing Conspiracy Theories

An interview with DebunkBot creator and AU Psychology Professor Thomas Costello


Newspaper headline reads 'Fake News.' Image created with generative AI.

Conspiracy theories are spreading on an unprecedented scale across the Internet and social media, including outlandish stories about alien abductions, government assassination plots, and politicians who can “geoengineer” the weather. More and more, these stories are weaponized for political purposes, leading many experts to conclude they are a real threat to democracy.  

But now there might be hope. American University Psychology Professor Thomas Costello is part of a Massachusetts Institute of Technology (MIT) team of scientists who have created DebunkBot, an artificial intelligence bot that chats pleasantly with users while respectfully and factually debunking their conspiracy beliefs.  

DebunkBot has a strong track record of persuasion. This is no easy feat, says Costello, who points out that people who believe in conspiracy theories are “really hard to persuade and don't often change their minds.” DebunkBot has captured the imaginations of scientists, politicians, and journalists — as well as lots of ordinary people who have visited the site to test its effectiveness at debunking their favorite conspiratorial beliefs.  

So, in this era of rampant misinformation and conspiracy theories, can AI bots like DebunkBot make a difference? In this interview, Costello answers questions about AI, human nature, and the battle over the truth. 

Can you tell us a bit about DebunkBot and what it's designed to do?  

DebunkBot is based on a research study that was published in Science. We used GPT-4 Turbo, which at the time was OpenAI’s most advanced large language model, to engage more than 2,000 conspiracy believers in personalized, evidence-based discussions. Participants were asked to describe a conspiracy theory they believed in, using their own words, along with evidence supporting their belief. 

GPT-4 Turbo then used this information to persuade users that their beliefs were untrue, adapting its strategy for each participant’s unique arguments and evidence. These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute specific evidence supporting each person’s conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology’s development.
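The setup described above can be pictured as a short dialogue loop: the participant's own statement of the belief and their supporting evidence are folded into the model's instructions, so each reply can target that specific evidence. Below is a minimal illustrative sketch of that loop — not the study's actual code; the prompt wording and the `ask_model` stand-in (any callable wrapping a chat-completion API) are assumptions for illustration:

```python
def build_debunk_prompt(belief: str, evidence: str) -> str:
    """Fold the participant's own words into the model's instructions."""
    return (
        "You are a factual, respectful assistant. The user believes the "
        f"following conspiracy theory: {belief!r}. Their supporting "
        f"evidence: {evidence!r}. Politely address and refute that "
        "specific evidence with accurate information."
    )

def run_dialogue(belief, evidence, user_turns, ask_model):
    """Run a short personalized exchange.

    `ask_model` is any callable that takes the message list so far and
    returns a reply string (e.g. a thin wrapper around a chat API).
    """
    messages = [{"role": "system",
                 "content": build_debunk_prompt(belief, evidence)}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        reply = ask_model(messages)
        messages.append({"role": "assistant", "content": reply})
    return messages
```

The key design point is that the counterargument is not generic: the system prompt carries the participant's exact claim and evidence, so every model turn is conditioned on them.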

How did things turn out?

The conversations lasted about eight minutes on average, and at the end, people rated whether they still believed in their conspiracy. On average, people reduced their conspiracy belief by about 20 percent, and one in four people became actively skeptical toward their conspiracy.  

Most participants developed at least a little bit of skepticism about their conspiracy. We followed up with them, and even two months later, their newfound skepticism was still present. Two months was the last time we checked, so presumably it lasted for even longer than that. 

DebunkBot has gotten a lot of attention, hasn’t it?

Yes. The DebunkBot website was actually secondary to our research paper published in Science — we just created the site for people to try it out for themselves. We thought the New York Times was doing a story on the Science paper, but they heavily featured the DebunkBot part, so the bot got a lot of traffic.  

I think 65,000 people have tried it so far. People seem into it. It's been featured in media outlets from the LA Times to The Guardian, and I was interviewed on BBC World News and NBC Nightly News. 

So, why are some people drawn to conspiracy theories? 

Conspiracy theories are just descriptive claims about the world. So, let's say someone is claiming that 9/11 wasn't a terrorist attack perpetrated by al-Qaeda, but rather, it was organized by the US government. They think there's evidence supporting this.  

Most people don't go out and test every hypothesis themselves. They rely on experts and other sources they trust. So, if they're exposed to information that is wrong but seems plausible, like a conspiracy theory, they might believe it, especially if it resonates with other things they believe are true. So, if you're someone who doesn't trust elites and government and institutions, you might be particularly prone to conspiratorial beliefs.  

That's one angle, and then another is that people can get psychological value from believing in conspiracy theories. If you're afraid and believe that the world is dangerous and random and chaotic, it's almost a comforting idea that there's order in the world, even if that order is something like an evil secret government.  

And then a third angle is group allegiance. If members of your group share a belief, you're also likely to believe it. 

Why do you think that misinformation is so prevalent right now?  

Bad beliefs, misinformation, and polarization have always been a problem. The democratization of information that came with the Internet, where we're not getting our facts from the same, shared sources, is a newer problem (or, at least the scale is new). Around the advent of the Internet, many people were optimistic that humans would be able to do their own research and come to their own good conclusions. But doing that sort of epistemic work may not be in our nature, and the Internet isn’t set up to stop people from drowning in random people’s opinions. 

Jonathan Swift once famously said that a lie can travel halfway around the world while the truth is still putting on its shoes. The Internet has really supercharged that phenomenon. People are able to spread misinformation, and it goes viral.  

Another issue is partial truths. For example, there was a lot of fake news about COVID vaccinations during the pandemic that contained straight-up false information. And of course that’s harmful, but it wasn’t super widespread.  

More impactful were real news stories that were true but ultimately misleading, like one in the Chicago Tribune with the headline, “Healthy Doctor Dies After Getting COVID Vaccine.” A doctor did in fact die after getting the COVID vaccine — but probably not because of the vaccine. The story went viral, and millions of people saw it and changed their beliefs a little. So that's not technically misinformation, but those kinds of stories probably caused more harm than outright misinformation. 

Human beings are not always thoughtful, don’t always stress-test beliefs against plausible counterarguments, and don't necessarily try to see the other side’s perspective. Fractionating into polarized groups only amplifies this. My hope is that AI can begin to act as a really effective counter. It can get around the world just as quickly as a lie (or a half-truth). 

Where do you see this all heading? Is this our new normal—or could things get even worse?  

We worry about the “bullshit asymmetry principle,” where it's easier to spread lies than to debunk them. AI is a tool for combating that because it's able to say in real time, “Here's why you're wrong,” or “this isn't trustworthy information.” People don't need to do the human cognitive labor to combat the misinformation themselves. This is an optimistic version of the future where AI helps people think more clearly.

There are also pessimistic versions predicting that AI will spread misinformation more quickly than AI can stop it, and it's hard to know what will happen. But I like using AI as a form of epistemic hygiene, almost like we brush our teeth every day. Maybe if we hear a wacky claim, we can go check with ChatGPT and see what it thinks. And you don't have to trust it completely. You can just use it to get another perspective.

When you created DebunkBot, what sources did you use? And what makes people, especially conspiracy-minded people, trust them?  

Good question. The sources we use are from the training data of the model. So, things that GPT-4 learned were true and reliable. We had a fact checker go through the AI’s claims to make sure what it was telling people was true and to look for political bias. We randomly chose 128 claims by the AI and had the fact-checker investigate each of them—127 were rated as “true,” one was “misleading,” and none were “false.” There was no evidence of political bias. That’s a really solid track record. 

Do you think people will use DebunkBot going forward?

One nice thing about this tool is that it doesn't judge you or make you feel bad for having a conspiracy belief. And if we were doing it outside of a research context, it would also be totally private. And it differs from an argument—when you're talking to a bot, the only reason to be doing it is to find out what's true. 

What’s next for your research?

I'm very excited about what behavioral science and psychology might be capable of with AI in terms of experiments. AI opens a lot of interesting creative doors—AI in the loop and talking to individual research participants, one on one, not just for persuasion, but for all kinds of things. I'm excited about using these new AI tools to study human beings.

I’m moving my research beyond conspiracy theories to other attitudes people hold: whether they prefer iPhones or Androids, how they feel about immigration or gun control, or any kind of epistemically questionable, non-conspiracy belief. The point of all this is to map out which kinds of beliefs are responsive to evidence and which aren't, which will help us understand why people hold the beliefs they do.

How will you use AI to do this?

For a long time, psychology research was done with one-on-one in-person interactions that took place in a room, where you'd have an experiment or deliver a treatment to a human being. It was hard to scale, and a lot of information got lost because no one wrote down everything that happened. 

And so, the field moved to online studies, where you pay people to participate, and you can do experiments, but they're a little less realistic and more constrained. They can’t easily be personalized.  

AI can marry these two waves: it can interact one-on-one with people while everything is recorded, preserving the creativity of original psychology research and the scalability of online research. We're expecting some big breakthroughs.