The Enemies of Science Fear Criticism — Scientists Shouldn't
A large-scale international survey—over 71,000 participants across 68 countries—recently examined how people view science. I contributed (in a small way) to this project.
The good news? Across most countries, people trust scientists. Trust was highest in Egypt and India, and lowest in Albania and Kazakhstan. But as we cautioned in the paper:
"Even a small minority’s distrust can distort how scientific evidence shapes policymaking."
And we are already seeing this happen.
A conspiracy theorist recently influenced the White House to fire six National Security Council members. A conspiracy-minded figure currently leads the U.S. Department of Health. Meanwhile, The Joe Rogan Experience—the world’s most popular podcast—regularly features unqualified guests spouting pseudoscience. Actor (!) Terrence Howard, for instance, was given free rein to casually dismiss all of physics and mathematics, claiming that “1×1=2” and that gravity doesn’t exist. There was barely a raised eyebrow in the studio, though his performance did earn at least one facepalm.
Psychology has become a favorite target for science deniers. I was stunned to come across an especially absurd example of this in Flat Earth propaganda. Flat Earthers, in their so-called Flat Earth Science Textbook, weaponize the replication crisis in psychology to argue that "mainstream science" as a whole cannot be trusted.
Yes, the replication crisis—scientists holding themselves accountable—is now being twisted to justify the claim that Earth is a flat pancake surrounded by an ice wall and sealed beneath a dome.
The irony here is both rich and instructive.
The replication crisis didn’t happen because science failed. It happened because science worked. Researchers turned a critical lens on their own practices and sought to improve them. And it's not just psychology; similar processes are reshaping medicine, nutrition, and many other fields.
This raises a tough but necessary question. Are scientists giving ammunition to the anti-science movement by criticizing one another publicly? Should we stay silent, pat each other on the back, and present a united front?
I don't think so. And I’m not alone.
Mickey Inzlicht argues persuasively that no one—regardless of seniority, identity, or status—should be shielded from scientific critique. Shielding ideas from criticism isn’t how science survives. It’s how it dies.
Moin Syed makes a related point. Bad actors may exploit scientific critiques for ideological purposes, but silencing critical inquiry would betray the very principles that make science trustworthy and self-correcting.
Science isn’t about defending a tribe; it’s about defending a method—a brutal, beautiful, and self-correcting method. (I get teary-eyed just typing that.) Criticism isn’t a flaw in science; it’s the beating heart of how science works.
There are two key priorities we need to focus on.
First, if we want to protect public trust in science, we shouldn't silence criticism. We should help people understand that scientists earn trust by rigorously scrutinizing each other's work.
Second, we must confront the deeper reasons why people fall into science denial in the first place. Disinformation is everywhere—about science, health, elections, even the basic laws of nature—and it’s costing lives.
Recent research suggests that resistance to science isn’t simply a matter of ignorance. It’s often fueled by loneliness, resentment, and anger. If we are determined to fight back against disinformation, better facts alone won’t be enough. We’ll also need to address the emotional and psychological scars that breed distrust.
In the end, defending science means defending both its methods and its spirit—a relentless pursuit of truth through questioning, testing, and self-correction. We need to show that science is not a cold institution handing down decrees, but a human enterprise driven by curiosity, humility, and courage.
Building trust in science won’t just require better communication; it will require empathy, patience, and a renewed commitment to the values that make science worth trusting in the first place.
The future of science—and perhaps much more—depends on it.



Your link to the large-scale international survey doesn't work, I am assuming it is this https://www.nature.com/articles/s41562-024-02090-5.
I really like this article, and if I had the time I'd expand on my response to it, but to boil down my thoughts to their barest essence:
- This seems to position the goal as defending science, but if science were shown to be a bad method, then I wouldn't think it worth defending / holding on to. The goal, to me, is truth. If anyone can show me a better method than the scientific method, then I will adopt that. I accept science itself only provisionally.
- The replication crisis is an example of the practice of science/academia failing, but also of how it can correct its failures. Still, its nature and scope are troubling: there was a massive inflection point where we suddenly discovered that many methods were far less reliable than we thought. Detecting the problem is good, but corrective measures clearly weren't being applied for quite a long time beforehand.
- I think it is notable that the replication crisis isn't talked about much in public by people who are generally science-optimistic. Despite many claims that had spread into public discourse failing replication, or turning out to be much more limited in nature and scope, there is little discussion of the role science played in this misinformation. Given the partisan nature of many of these claims, it makes academia come across as one-sided.
- Science shouldn't require trust, nor should it revolve around credentialism. Terrence Howard is talking about math in the example given, but his being an actor doesn't in and of itself discredit his position. Most lay people lack the background to make a good critique, but from their perspective, when they think they have a valid point, dismissing them out of hand based on their profession is both intellectually vacuous and understandably off-putting.
- Due to differences in definitions, I think experts often make incorrect and indefensible statements in discussions with lay people. Economists talking about "efficiency" is a pretty good example of this, where economic efficiency isn't quite what ordinary people typically think about in terms of efficiency.