Scientific Censorship

New paper explores censorship and self-censorship in science

Institutional Structures and Norms Exacerbate Censorship. They Can Help Fight It Too.

Censorship is widespread in academe and has grown worse in recent decades. Indeed, the expressive environment in higher ed seems less free than in society writ large – even though most other places of employment offer basically no protections for freedom of expression, conscience, or research, while universities formally guarantee all three.

Almost everyone is opposed to censorship in the abstract. But when confronted with concrete examples of ideas they personally find offensive – liberals facing arguments about the relationship between genes and inequality, or conservatives being made to reckon with queer theory, for instance – people are often more supportive of censorship in practice than their generic views would suggest.

Many presume censorship is mostly driven by right-wing agitators, such as Fox News, or else by lefty “kids these days” who don’t properly understand or value academic freedom. However, as we (and our co-authors) demonstrate in a new study published in the Proceedings of the National Academy of Sciences, censorship is more typically driven by scientists themselves.

Consider the results of a recent national survey of faculty at four-year colleges in America: 16% of faculty had been disciplined or threatened with discipline for their teaching, research, talks, or non-academic publications. Depending on the issue being discussed, between 6% and 36% of faculty supported soft punishment (condemnation, investigations) for peers who make controversial claims, with support higher among younger, more left-leaning, and female faculty. 34% of professors had been pressured by peers to avoid controversial research; 25% reported being “very” or “extremely” likely to self-censor in academic publications; and 91% reported being at least somewhat likely to self-censor in academic publications, meetings, presentations, or on social media.

Those who are more institutionally isolated or vulnerable – such as non-tenured faculty or professors who belong to certain underrepresented ideological or demographic groups – are more likely to be successfully censored by others, and to self-censor as a means of avoiding that outcome. Censorship and self-censorship are more pronounced in certain fields – most notably the humanities and social sciences. Certain topics are also much more likely to provoke censorship, particularly those tied to contentious moral and political issues, especially for scholars whose findings diverge from dominant views.

The motives behind censorship are commonly misunderstood. Sometimes scientists censor one another in the context of power struggles or for other unsavory reasons. Most of the time, however, benign motives are at play.

Many academics self-censor to protect themselves – not just because they’re concerned about preserving their jobs, but also because they want to be liked, accepted, and included within their disciplines and institutions, or because they don’t wish to create problems for their advisees. Other times, scholars attempt to suppress findings because they view them as incorrect, misleading, or potentially dangerous. And sometimes scientists try to quash public dissent on contentious issues for fear that it will undermine public trust or scientific authority, as happened at various points during the COVID-19 pandemic.

Moral motives have long influenced scientific decision-making. What’s new is that journals are now explicitly endorsing moral concerns as legitimate reasons to suppress science. Following the publication (and retraction) of an article reporting that higher proportions of male (vs. female) senior collaborators were associated with higher post-collaboration impact for female junior authors, Nature Communications released an editorial promising increased attention to potential harms. A subsequent Nature editorial stated that authors, reviewers, and editors must consider the potentially harmful implications of research, and a Nature Human Behaviour editorial declared that the journal might reject or retract articles with the potential to undermine the dignity of human groups. In effect, editors are granting themselves vast leeway to censor high-quality research that offends their own moral sensibilities, or those of their most sensitive readers.

It is reasonable to consider potential harms before disseminating science that poses a clear and present danger, such as scholarship that increases risks of nuclear war, pandemics, or other existential catastrophes.

However, the suppression of scientific findings and ideas often has significant adverse consequences as well. Censorship can durably limit our understanding of important phenomena and slow progress toward solving significant problems, causing people to struggle, suffer, and die needlessly. It can lead to misinformation cascades or nullify entire fields of research (because the people who would like to declare that the emperor has no clothes are locked out of the conversation), wasting enormous amounts of resources and effort that could be better directed elsewhere. And although scientists sometimes quell dissent to preserve their perceived authority, if this suppression becomes public knowledge, it tends to dramatically undermine public trust in scientific findings and the scientific community.

We could more effectively balance the risks of information dissemination against the costs of censorship by creating empirical and transparent measures of purported harms, rather than relying, as we currently do, on the often arbitrary intuitions and authority of small and unrepresentative editorial boards.

We could increase accountability in peer review by making the review and decision-making process as open as possible. Reviews and editorial decision letters could be provided in online repositories available to all scholars (with reviewer and editor names redacted if appropriate). Professional societies could make available the submissions, reviews, and acceptance/rejection decisions for their conferences (again, with identities redacted as appropriate). This would allow scholars to discern double standards in decision-making – for instance, censorship of disfavored views or lax scrutiny of work that advances preferred narratives. And as a consequence of this increased transparency, editors and reviewers might become more consistent and careful in their decision-making: studies show that people behave in less biased ways when others can easily observe disparities, or when they might have to explain their decisions to potentially unsympathetic audiences.

Scholars could increase a sense of accountability among peer reviewers and editors by conducting regular audits of the journals in their fields. Researchers have long submitted nearly identical papers to journals, changing only details that should not matter (like the author’s name or institutional affiliation) or reversing the direction of the findings (all else held constant), to test whether acceptance decisions and the kinds of comments reviewers offer vary systematically based on who the author is or what they find. To date, studies like these have provided important insights into how censorship works and against whom it is deployed. If scholars audited the journals in their fields more consistently and systematically, however, it would become easier to compare journals against one another and to highlight publications that are especially biased – or especially objective – in their decision-making.

Scholars could also conduct large-scale surveys of scientists who have submitted to various journals to evaluate perceived procedural fairness. Some journals (e.g., the Proceedings of the National Academy of Sciences) already survey submitters on relevant questions; to our knowledge, however, none currently share this information publicly. If journals were pushed to collect and publish these data more consistently, scholars in a field could better understand their colleagues’ perceptions of bias at various journals, and scientists could more effectively target their work to publications where it would get a fair hearing.

Collectively, measures like these could create new forms of competition among scientific journals. New metrics could be built from these data to tie the reputations of journals, editors, and peer reviewers to how open and fair their publication practices are. Scholars doing important and groundbreaking work would likely seek out the journals that are most credibly objective, while biased journals would see fewer submissions of high-quality, high-impact work – dragging down the metrics stakeholders already care about at publications with low-quality reviewing practices, and likely spurring reforms.

Just as there are institutional and cultural factors that can make censorship and self-censorship more pronounced, there are measures that we can take to render censorship easier to perceive and more reputationally costly. Let’s try some of those instead of what we’re currently doing.

Co-authored with Cory Clark
Originally published 11/20/2023 in The Chronicle of Higher Education
