New paper explores censorship and self-censorship in science

Measuring Censorship Is Difficult. Stopping It May Be Harder

In a new paper for the Proceedings of the National Academy of Sciences, we, alongside colleagues from a diverse range of fields, investigate the prevalence and extent of censorship and self-censorship in science.

Measuring censorship in science is difficult. It’s fundamentally about capturing studies that were never published, statements that were never made, possibilities that went unexplored and debates that never ended up happening. However, social scientists have come up with some ways to quantify the extent of censorship in science and research.

For instance, statistical tests can evaluate “publication bias” – whether papers with findings tilting a particular way were systematically excluded from publication. Sometimes editors or reviewers reject findings that don’t cut in the preferred direction with the preferred magnitude. Other times, scholars “file drawer” their own papers that don’t deliver statistically significant results pointing in the “correct” direction, because they assume (often rightly) that the study would be unable to find a home in a respectable journal or because publishing the findings would come at a high reputational cost. Either way, the scientific literature ends up distorted because evidence that cuts in the “wrong” direction is systematically suppressed.
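To make this concrete, here is a minimal sketch of one widely used publication-bias check, Egger’s regression test, which asks whether small, noisy studies report suspiciously large effects. The effect sizes and standard errors below are invented for illustration; only the method is standard.

```python
# A minimal sketch of Egger's regression test for publication bias.
# Requires numpy and statsmodels; the data are hypothetical.
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect estimates and their standard errors.
effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48, 0.09, 0.38, 0.51])
std_errs = np.array([0.20, 0.12, 0.25, 0.05, 0.22, 0.04, 0.15, 0.24])

# Egger's test regresses each study's standardized effect (z-score)
# on its precision (1 / SE). Absent small-study bias, the intercept
# should be near zero; a significantly nonzero intercept suggests
# small, imprecise studies were published mainly when their effects
# came out large.
z_scores = effects / std_errs
precision = 1.0 / std_errs
model = sm.OLS(z_scores, sm.add_constant(precision)).fit()

intercept, intercept_p = model.params[0], model.pvalues[0]
print(f"Egger intercept = {intercept:.2f} (p = {intercept_p:.3f})")
```

A significant intercept is consistent with, though not proof of, selective publication; such funnel-plot asymmetry tests are standard fare in meta-analysis software.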

Audit studies can provide further insight. Scholars submit identical papers but change things that should not matter (like the author’s name or institutional affiliation) or reverse the direction of the findings (leaving all else the same) to test whether papers are systematically more likely to be accepted or rejected, and what kinds of comments reviewers offer, based on who the author is or what they find. Other studies collect data on all papers submitted to particular journals in specific fields to test for patterns in whose work gets accepted or rejected and why. This can uncover whether editors or reviewers apply standards inconsistently in ways that shut out particular perspectives.
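For the audit design described above, the core comparison is often as simple as a test of two proportions: do acceptance rates differ between the two versions of the same paper? A minimal sketch with hypothetical counts:

```python
# A two-proportion z-test comparing acceptance rates for otherwise
# identical papers whose findings point in opposite directions.
# The counts are hypothetical; requires statsmodels.
from statsmodels.stats.proportion import proportions_ztest

accepted = [34, 19]     # acceptances: [preferred direction, reversed]
submitted = [100, 100]  # submissions per condition

z_stat, p_value = proportions_ztest(accepted, submitted)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```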

Additionally, databases from organizations like the Foundation for Individual Rights and Expression or PEN America track attempts to silence or punish scholars, alongside state policies or institutional rules that undermine academic freedom. These data can be analyzed to understand the prevalence of censorious behaviors, who partakes in them, who is targeted, how these behaviors vary across contexts, and what the trendlines look like over time.
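Assuming a tabular export of such a database (the file and column names below are hypothetical), the trendline analysis this kind of data supports can be as simple as a grouped count of incidents per year:

```python
# A minimal sketch of charting trends from an incident database,
# assuming a CSV export with one row per targeting attempt.
# File and column names are hypothetical.
import pandas as pd

incidents = pd.read_csv("targeting_incidents.csv")

# Count incidents per year, broken out by where the pressure came from.
trend = (
    incidents
    .groupby(["year", "source_of_pressure"])  # assumed columns
    .size()
    .unstack(fill_value=0)
)
print(trend)
```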

Supplementing these behavioral measures, many polls and surveys ask academic stakeholders how they understand academic freedom, their experiences with being censored or observing censorship, the extent to which they self-censor (and about what), or their appetite for censoring others. These self-reports can provide additional context to the trends observed by other means – including and especially with respect to the question of why people engage in censorious behaviors.

One thing that muddies the waters, however, is that many scholars understand and declare themselves as victims of censorship when they have not, in fact, been censored.

For instance, rejection from a journal for legitimate reasons, such as poor scientific quality, is not censorship – although there could be censorship at play if the standards reviewers and editors hold papers to vary systematically depending on what authors find and which narratives the paper helps advance.

Likewise, it’s not censorship if your work, upon publication, is widely trashed or ignored. No one is entitled to a positive reception.

Granted, peer responses to a paper may be unfair or a product of unfortunate biases. A hostile response to particular findings may dissuade other scholars from publishing similar results. And the reception of published work can have career implications for scholars: well-received works can be career enhancing, while poorly received works have the opposite effect. Nonetheless, there is no censorship at play unless one’s scholarship is prevented from publication, or there are campaigns post-publication to punish the author for their study (through formal or informal channels) or have the work retracted or suppressed.

Work ignored upon publication has not been censored either. The overwhelming majority of published research receives few reads, even fewer citations (especially if we exclude self-citations), and makes no meaningful impact on the world. This is the outcome people should generally expect for their scholarship, for better or for worse. If someone experiences the modal result for their published work (it gets ignored), this should not be assumed to be a product of unjust bias. And even where there is “dissemination bias” at play (systematic variance in whether papers are read, shared, cited, included in meta-analyses, or receive media coverage based on whether they advance or undermine a particular narrative), this is an importantly different problem from censorship.

Likewise, it’s not censorship if scholars engage others in mocking, disrespectful or uncharitable ways and are generally greeted with hostility in turn. There are many “crybullies” in the culture war space who characterize reasonable pushback to their own aggressive behaviors as political persecution.

Nor is it censorship if scholars advocate for a particular position while violating academic rules and norms, and these violations result in censure. Such punishments could approach censorship if standards are enforced inconsistently. It would likewise be censorious for people to dig up dirt on the author of a publication they disliked in order to have them punished for ostensibly unrelated offenses, or to launch spurious investigations to make their lives miserable.

It is also necessary to distinguish between self-censorship that arises from real and highly costly threats and self-censorship driven by cowardice or inaccurate information. Often there is plenty of room for people to dissent from prevailing views without significant adverse consequences, but scholars refuse to speak out regardless, because they misperceive the magnitude or likelihood of sanction or because they are unwilling to incur even mild risks to speak their minds (although we academics often compare ourselves to the likes of Galileo, higher ed may in fact have unusually high concentrations of cowards, conformists and careerists). These aren’t instances of censorship where other people are the problem. The problem in these cases is largely in the mind of the self-censor.

By carefully working through the best available data on censorship in science, sifting genuine cases of suppression from culture war chaff, some general patterns emerge.

One of the most striking patterns is how often censorship is driven by scientists themselves.

Typically, when we think or talk about censorship, we imagine external authorities (like governments or corporations), or perhaps campus administrators or overzealous students. We often understand censors to be driven by ignorance, ideological authoritarianism or a desire to suppress findings that are inconvenient for someone’s political project or bottom line.

In fact, censorship and self-censorship seem to be most typically driven by prosocial motives. Sometimes scholars self-censor or suppress findings because they worry that claims will be easily misunderstood or misused. Sometimes they self-censor and instruct their advisees to do the same out of a desire to avoid creating difficulties for their colleagues and students. Sometimes findings seem dangerous or unflattering to populations that are already stigmatized, vulnerable or otherwise disadvantaged, and scientists suppress findings out of a desire to avoid making their situation worse (although, in practice, censorship often ends up having the most dramatic and pernicious effects on these very populations).  

Critically, it isn’t just censorship that works this way. Many other academic problems tend to be driven by prosocial motives as well.

As psychologist Stuart Ritchie demonstrates in Science Fictions (Metropolitan Books, 2020), academics who commit fraud often seem genuinely convinced that the narratives advanced by their papers are, in fact, true. Fraud is often motivated, in part, by a desire to amplify what scientists believe to be the truth when their experiments fail to provide the expected confirmatory data. In other cases, scholars are convinced that a new treatment or intervention can help people, but they feel like they need eye-popping results to draw attention or secure funding for it – leading them to either massage the data or overhype their findings.

And as Lawrence Lessig shows in America, Compromised (University of Chicago Press, 2018), it is often scholars who are sincerely committed to honesty and rigor who end up being corrupted – and it is precisely their high sense of integrity that often blinds them to the ways they end up compromising their work.

This is precisely what makes many problems with the state of science difficult to address. They often aren’t caused by bad scientists driven by evil motives but by researchers trying to do the right thing in ways that ultimately undermine the scientific enterprise.

To reduce censorship and self-censorship, it’s not enough to create robust protections for academic freedom. We must also convince scientists to use those freedoms to follow the truth wherever it leads and to tell the truth even when doing so seems to conflict with other priorities.  

Co-authored with Nicole Barbaro.
Originally published 12/5/2023 by Inside Higher Ed.
Syndicated 12/14/2023 by Real Clear Science.
