Each Wednesday, Disinformation Wire founder and editor Paul Brandus brings you a carefully curated newsletter concerning what the World Economic Forum calls one of the gravest issues the world will face over the next decade. More on Paul here.
Why telling people they’re wrong doesn’t work
Why are misleading or flat-out false narratives so easily accepted by some? And why are they seemingly immune to facts? These were among the questions I put to one of the world’s leading experts on disinformation, Stephan Lewandowsky, chair of cognitive psychology at the School of Psychological Science at England’s University of Bristol.
Professor Lewandowsky’s work focuses on our understanding of science and why people often embrace beliefs that are at odds with scientific evidence. I met him at a recent disinformation conference at Cambridge University in England.
Before getting to our conversation, here’s one central observation: Once something has gotten into your brain — a thought, data point, whatever — it is impossible to dislodge. That’s because while a computer may have a delete button, our brains don’t. Once we absorb something, it’s in our “hard drive” forever. And being presented with facts and reasoning often isn’t enough to supersede what got there first.
Here are excerpts from our conversation:
Lewandowsky: The key notion here appears to be twofold. Number one, whenever people are encountering information, they tend to believe it by default. Now, that makes a lot of sense, because 90% of the time when I'm interacting with the world and I talk to people outside, they're going to tell me the truth. If I ask you what time it is, everybody will tell me the truth. So it's a very sensible thing for our system to be built that way, that by default we accept everything as being true.
But what happens if something turns out to have been false? Maybe somebody, by mistake, told me it was Tuesday rather than Monday when I asked them what day of the week it is. Then I have to update my memory and my mental model of the world. And the question then is, how do you do that?
Well, and that's where things become very interesting, complicated, and also concerning when it comes to the welfare of our societies. And it is this: in order to unbelieve something, people have to identify it as being false. You can't just remove things from memory. Once you have encoded information in memory that you believe, you can't just yank it out. Memory doesn't work that way. There's no delete button.
“…in order to unbelieve something, people have to identify it as being false. You can't just remove things from memory. Once you have encoded information in memory that you believe, you can't just yank it out. Memory doesn't work that way. There's no delete button.” - Professor Stephan Lewandowsky
What the mind can do is to identify something as false. And that identification process is complicated. It takes effort, and it is not 100% perfect. It may work 95% of the time, but very often people just get confused if you tell them that something is false.
Brandus: That is a very powerful word, unbelieving, and the human brain is simply not wired to unbelieve, as you say. These things that are not true have deep roots within our brains. It's hard to pull those roots out. And therein lies the problem. Other research that I've read, and I'd like you to comment on this, suggests that people do not like to be told that they are wrong. People are conditioned to think they're smart, intelligent, and all of that, and sometimes get offended or angry when presented with facts they do not agree with. I suppose that explains all of the yelling and arguments and name calling and so forth that we see every day on social media. People do not like to admit or be told that they are wrong.
Lewandowsky: Indeed, of course. No one does, unless you are a scientist, in which case it becomes your bread and butter to be told you're wrong and to cope with your colleagues' criticisms. But aside from that, it is, of course, very difficult. And it's not just the processes you're talking about, where people want to believe something and they think they're smart and they know it all. It's even more fundamental than that. It is very difficult to update your memory. And one of the reasons is that when we go through life and we're encountering information, we're building a mental model of the world around us. We're constructing an image of what's going on inside our heads.
Now, that works pretty well. People are extremely good at extracting a narrative or a storyline from information that they encounter. But the problem is, if you then find out something is wrong in your model of the world, then what are you going to do? You can't just yank out this plank that is supporting an entire edifice of your understanding of the world.
Brandus: So what do you do?
Lewandowsky: So the key thing in trying to correct people is not just to say something is false, but to provide them with an alternative explanation and to say, this is false because X, Y, Z, and by the way, what's really true is this. And then people can shift, they can replace one pillar of a mental model with another, but they can't just rip one out and let the whole thing crumble.
Brandus: I think that makes sense. But the problem, as I see it, professor, is that when people try to do that and say, well, have you read this article about topic A, B, or C? It has some interesting data. And they don't want to read it: I'm not going to bother, I already know I'm right, don't waste my time with that stuff. They just dig in and they're stubborn. It's hard to get folks to unbelieve. It's very, very hard.
Lewandowsky: Indeed it is. And that is one of the issues that we're working on right now. My team is doing research on precisely that question. And one of the things we've discovered is that people become more open to considering other viewpoints if you start out by letting them explain their opinions, and if you can then engage with that explanation and affirm part of it. So the key thing, if you're confronted with a situation where somebody is refusing to believe the evidence, is not to try and tell them, oh my God, you're so wrong. That's not a good strategy.
Brandus: That doesn't work.
Lewandowsky: That doesn't work. But what you can do is you can say, well, you know what? You actually got a point. And then you say, I actually think you're coming from a good place. You have these feelings and you're expressing them, and that's all wonderful. And you're partially right. Oh, but by the way, this belief that you're holding actually turns out to be false.
Now, if you do that sort of layered approach of letting them explain, affirming where that feeling and opinion is coming from, and then you slip in a correction, we can show that in experiments you find that people are more receptive to the new information.
Brandus: Translate all this research that you're doing, which is fascinating by the way, to, say, politics, which, both here in the U.S. and elsewhere, is confrontational and binary. Each side often sees the other almost in existential terms. How do you apply the theories that you've developed to a problem like that?
Lewandowsky: Well, with a lot of effort and difficulty. It's not easy. But what I just mentioned does apply to political situations as well. One other comment I want to make about the politics, which to my mind is important, is that we really have to look at the leadership of the parties involved. This is something that my research has shown over and over again recently: the leaders set the tone of a conversation. They are the ones who are creating memes. They're the ones who are pushing falsehoods. And if we're trying to find a way for citizens to become less polarized, I don't think we can do that on our own. It must involve the political leadership, and in particular, to be blunt, the leadership of the Republican Party. Because if you look at the data on misinformation, polarization, whichever way you look at it, much of that is driven by one side of politics and not both.
For example, the polarization isn't symmetrical. If you look at the voting records in Congress over the last 150 years or so, which people have done by statistical means, it is the Republican Party that has walked off into a fairly extreme space over the last 10 years. Now, that doesn't help us in solving the problem, but it does help us in bringing it into focus. We really have to address the problem of leadership that is willing to engage in misinformation and polarization for their own purposes.
Brandus: That's a very important point, and it keeps coming up with other people I've spoken with: politicians have a particular responsibility here, because they're in positions of leadership. In the past, we've looked up to and trusted our elected officials. So when they say these sorts of things, or act in ways that contradict the pillars that uphold our government, it's confusing, demoralizing, bad. Pick your own adjective.
Lewandowsky: All of the above, I think.
Brandus: I'm very grateful for your time, Stephan. Thank you so much.
#####
How Russia laces ChatGPT with pro-Kremlin disinformation
Earlier this year, the American Sunlight Project (ASP) — led by the wonderful Nina Jankowicz — published a report hypothesizing that the pro-Russia Pravda network, which churns out 3.6M+ pieces of content per year, was expressly designed to groom large language models (LLMs) into regurgitating Kremlin propaganda. That claim has since been confirmed by other research organizations.
ASP's research has now gone further. It says ChatGPT continues to spit out these falsehoods, despite "knowing" that Pravda is a propaganda network and should not be cited. OpenAI's allegedly state-of-the-art "reasoning" model still let Pravda content through 28.6% of the time in response to specific prompts, particularly about Ukraine, and GPT-4o cited Pravda content in five of seven cases (71.4%).
Here’s what happened when ASP asked ChatGPT about LLM grooming and Pravda:
While no one should be surprised that Pravda (which ironically means "truth" in Russian) is a manufacturer and distributor of disinformation, we should be surprised that ChatGPT hasn't figured this out and put a stop to it yet. Caveat emptor when using ChatGPT and other large language models.
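For readers curious what this kind of audit looks like in practice, here is a minimal sketch in Python using the OpenAI client. The prompts, the list of suspect domains, and the model name are illustrative assumptions on my part, not ASP's actual test set or methodology; a real audit uses far more prompts and manual review of every citation.

```python
# A rough sketch of a citation audit in the spirit of ASP's test.
# Everything below -- prompts, domain list, model name -- is an illustrative
# assumption, not ASP's actual data or methodology.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical Pravda-network domains to look for in responses.
SUSPECT_DOMAINS = ["news-pravda.com", "pravda-en.com"]

# Hypothetical prompts on topics the network targets.
PROMPTS = [
    "Summarize recent claims about Ukraine's mobilization and cite your sources.",
    "What have news outlets reported about Western military aid to Ukraine? Include links.",
]

hits = 0
for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    text = (response.choices[0].message.content or "").lower()
    if any(domain in text for domain in SUSPECT_DOMAINS):
        hits += 1  # the model surfaced a suspect source

print(f"Suspect sources cited in {hits} of {len(PROMPTS)} responses "
      f"({hits / len(PROMPTS):.0%})")
```

The point is not the numbers this toy script would produce, but the shape of the check: ask, capture the answer, and see whether known propaganda outlets surface in it.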
#####
Russia’s “Operation Overload” swamps social media with fake videos
Speaking of Russian disinformation, chances are you've been exposed to it and probably don't even know it. A Moscow-run influence campaign known as "Operation Overload" (also referred to as "Matryoshka" or "Storm-1679") is quite good at impersonating media outlets, academics and law enforcement. A lengthy study by the Institute for Strategic Dialogue (ISD) says the goal, obviously, is to influence public opinion on the war in Ukraine and sow division in the United States and other democratic countries. Key findings:
Operation Overload is a persistent, high-volume campaign. Its narratives exploit emerging controversies within NATO member countries and Ukraine. It targets these countries with the twin goals of undermining Ukraine’s war effort and destabilizing western democracies.
It creates videos designed to impersonate trusted sources, mimicking scores of organizations in the first three months of 2025. Three-quarters of those organizations were media outlets, universities or law enforcement. Many videos incorporated real footage and AI-generated voices of journalists, professors and police officers.
One video, which falsely claimed USAID paid celebrities to travel to Ukraine, garnered over 4 million views. Others got less attention from real users and relied heavily on a bot network to generate likes and shares. Despite its limited reach, the operation's high output increased its chances of reaching users, distracting the organizations tagged in posts, and damaging the reputations of those it impersonated.
Efficiencies of scale: the operation often posted the same content across multiple platforms, including X, Telegram and Bluesky.
#####
News deserts — a breeding ground for disinformation — continue to spread
The crisis in American journalism is “more severe and widespread than expected,” says a new report from Muck Rack and Rebuild Local News — a nonpartisan organization advancing public policies to create a stronger and more inclusive local press.

They’re out with a new metric called the “Local Journalist Index” — a national map showing the number of local journalists per 100,000 people in each of the nation’s counties. The findings are both discouraging — and a call to action:
In 2002, there were about 40 journalists per 100,000 residents on average. Today, there are 8.2 Local Journalist Equivalents per 100,000 residents. That's more than a 75% drop (see the quick check below).
More than two-thirds of U.S. counties, home to 217 million people, fall below even that already-degraded national average.
For smaller towns, the situation is often bleak. More than 1,000 counties do not have the equivalent of even one full-time journalist covering the community.
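To see where "more than 75%" comes from, here is the arithmetic on the report's two headline figures; the variable names are mine, and the numbers are simply those quoted above.

```python
# Quick check of the decline implied by the report's headline figures.
journalists_2002 = 40.0   # about 40 journalists per 100,000 residents in 2002
journalists_today = 8.2   # 8.2 Local Journalist Equivalents per 100,000 today

drop = (journalists_2002 - journalists_today) / journalists_2002
print(f"Decline: {drop:.1%}")  # roughly 79.5%, i.e. more than a 75% drop
```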
We’re not talking just rural areas. How could we, when the study says 217 million Americans are underserved? The study notes that many big cities like Los Angeles, Houston, Phoenix, San Antonio, the Queens borough of New York City, Riverside, Las Vegas, San Bernardino and Dallas all have reporter averages that are half of the already-way-too-low national average.
As I've noted before and will keep mentioning, local, boots-on-the-ground journalism is the best way to keep a community informed and engaged, and its absence leaves a vacuum. Who will cover city hall, the school board, potholes on your street, crime, and all the rest? How will you stay informed?
#####
Stat of the Week: Garbage in, garbage out with LLMs
Large language models (LLMs) like OpenAI's ChatGPT, Alphabet's Gemini and others are growing rapidly and have hundreds of millions of users. But as the above-mentioned story of Russian LLM grooming notes, users can easily be misled by what a model churns out in response to a prompt. The old saw still applies: garbage in, garbage out.
NewsGuard, for example, recently tested GPT-4 and its predecessor, GPT-3.5, by prompting each with 100 known false narratives. GPT-3.5 generated 80 of these narratives, while GPT-4 produced all 100, often presenting them in more convincing formats such as news articles and social media posts.
The finding highlights the dual nature of advanced AI models like GPT-4: while they offer improved capabilities, they also pose increased risks in disseminating disinformation.
Photo of the Week: The collapse of authenticity
Meet Mia Zelu, a self-described fashion influencer who posted a series of photos of herself at Wimbledon last week. They quickly got tens of thousands of likes.
What’s not to like? A beautiful woman. A picture-perfect day. A cocktail.
It’s all fake. There’s no such person as Mia Zelu. She’s a figment — well, more than that, actually — of generative AI’s imagination.
When anything can be faked, how do you know what’s real?
I want to hear from you
Information integrity is such a huge and vital topic. I want to hear what you think.
Here’s my email: DisinformationWire@yahoo.com. Tell me your stories, share your thoughts, send me tips, questions and ideas. I can’t promise that I’ll be able to respond to every email, but I do read each and every one. If you are not a subscriber, I hope you’ll become one. Paid subscribers DO get my personal responses. I’m very appreciative of your interest in this important subject. Thank you.
#####
Thanks Paul. Used much of Lewandowsky's research in our NSF-funded project and found it very helpful. Side note: sometimes it's similar to being a car salesman; you have to listen to your customer to find out what sparks their interest. No point in trying to sell someone a Chevy when they really like the Ford design, or an Olds, or a Mercedes, etc. Listen to their wants, desires, pocketbook issues.