
Vox’s Consistent Errors on Campus Speech, Explained

Part II

Let me start by saying that, in some respects, it is a strange debate between Beauchamp, Yglesias and me:

In the highly-polarized political environment in which we find ourselves, it seems to be a standard assumption that if someone is criticizing one position, it must be because they personally hold the opposite view themselves. For instance, if I am criticizing Beauchamp and Yglesias’ essays “proving” there is no speech crisis, it must be because I believe there is one, right?

Yet I close my recent essay, “Vox’s Consistent Errors on Campus Speech, Explained,” as follows:

Beauchamp and Yglesias insist that the burden of proof is on those who declare there is a crisis. I happen to share this conviction.

San Diego State University psychologist Jean Twenge, NYU social psychologist Jonathan Haidt, HxA Research Director Sean Stevens, FIRE President Greg Lukianoff, sociologists Bradley Campbell and Jason Manning, and others have responded to this challenge by offering compelling – albeit preliminary — evidence that there is a significant normative shift underway among contemporary young people with regards to free expression and other issues (here, here, here, here, here, here, here).

However, in my personal view, more (and different) evidence needs to be marshalled by proponents in order for their case to be fully persuasive. And further research is being done — both within Heterodox Academy and beyond. In the meantime, my position, as I stated in the initial piece, is that the jury is still out on the extent of any normative change – but it is probably unhelpful to refer to it as a “crisis” in any case.

In short, I have no issue with Yglesias and Beauchamp’s skepticism regarding the campus free speech “crisis.” I am somewhat skeptical myself. The problem I have is with the specific evidence they attempted to deploy to “prove” there is no crisis.

Specifically, I argued that Yglesias failed to control for straightforward confounds in his analysis of the GSS data – and when these are controlled for, it seems like contemporary students may actually be less tolerant of those they disagree with than previous cohorts. But don’t just take my word for it: political scientists April Kelly-Woessner (Elizabethtown College) and John Sides (George Washington University) have also published essays underscoring this point using the same GSS data that Yglesias relied on.

But I also cautioned that, across the board, the General Social Survey provides (at best) weak evidence with regards to this dispute. Why? Because the GSS has such a small sample of college students in any given year that it would be irresponsible to generalize much from it. In 2016, for instance, the survey included roughly 32 enrolled students who fell within the “iGen” age group (the cohort that Twenge et al. have argued holds different values on free speech — the very position Yglesias seems to take himself to be refuting).

Obviously, one cannot make sound claims about the millions of iGen students and their values on the basis of surveying a few dozen of them — but this is the best the GSS can muster right now. In other words, even if Yglesias’ analysis didn’t suffer from important confounds (which it does), the data he relied on could not rebut, or even meaningfully speak to, the claims of those who argue that there is a major cohort change with iGen students on free speech (e.g. Haidt, Lukianoff, Twenge, Campbell, Manning, Stevens).
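
To make the sample-size point concrete, here is a minimal sketch of the kind of filtering involved, assuming a GSS extract saved locally as a CSV. The column names and the enrollment coding below are placeholders rather than actual GSS variable names, and the 1995 birth-year cutoff is a rough stand-in for Twenge’s iGen boundary.

```python
# Illustrative sketch only -- column names are placeholders, not real GSS variables.
import pandas as pd

gss = pd.read_csv("gss_extract.csv")  # hypothetical local extract of the GSS

IGEN_BIRTH_YEAR_CUTOFF = 1995  # rough iGen boundary per Twenge (assumption)

igen_students_2016 = gss[
    (gss["year"] == 2016)
    & ((gss["year"] - gss["age"]) >= IGEN_BIRTH_YEAR_CUTOFF)  # born 1995 or later
    & (gss["enrolled_in_college"] == 1)                       # placeholder coding
]

# If this count comes out to a few dozen respondents, generalizing to the
# millions of iGen students nationwide is statistically irresponsible.
print(len(igen_students_2016))
```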

Yet Yglesias drew a very strong conclusion (“Everything we think about the ‘free speech crisis’ is wrong”) on the basis of this very weak data.

For Beauchamp, I argued that he misrepresented data from all the sources he cited in a recent Vox report. I focused on two sources, which occupied the bulk of his essay: the Free Speech Project database, and a database on faculty firings by Acadia University political scientist Jeffrey Sachs.

In his attempted rebuttal, “The myth of a campus free speech crisis,” Beauchamp flagged that there were actually three sources he ostensibly relied upon: in addition to the Free Speech Project and Sachs’ database on faculty firings, he also cited FIRE’s disinvitation database.

Fair enough.

But ironically, this correction only makes Beauchamp’s problem worse. Allow me to briefly walk through the three sources Beauchamp cited, why his description of their findings was problematic, and why his “rebuttal” fails to resolve any of my core criticisms:

Free Speech Project (FSP)

To review, my core criticisms of Beauchamp vis-à-vis the FSP data were:

  1. The FSP data is so preliminary and incomplete that it cannot yet effectively speak to the overall prevalence of these incidents – which is what Beauchamp’s story was about. And so, while the FSP project is fantastic on its own merits, it is inappropriate to try to use this data for the kind of case Beauchamp was trying to make. Dr. Ungar’s essay said absolutely nothing about overall prevalence.
  2. Beauchamp’s portrayal of the “free speech crisis” on the basis of the FSP data seemed to be far out of touch with Dr. Ungar’s own view on the matter on the basis of this same data. This divergence, I argued, was a product of the issues in criticism 1: Dr. Ungar came to a different conclusion, not because of some major ideological difference — but because he had a better understanding of his data (and its limits) than Beauchamp seemed to.

In the attempted Vox rebuttal, the second criticism is validated: Beauchamp acknowledges that Dr. Ungar sees a more serious problem than one might have gathered from his initial essay.

“[Dr. Ungar] is certainly not as much of a skeptic about the free speech ‘crisis’ as I am — he believes that there is a real problem, particularly for university administrators who are terrified of a high-profile incident happening at their campus, and that there is ‘evidence’ that speech is ‘being suppressed’ in certain instances.”

With this established, we can occupy ourselves primarily with point 1. But first, it turns out Beauchamp actually made another error in his FSP discussion, which I originally missed but which is relevant here. The original opening language of the essay ran:

“Entire books and online magazines are premised on the idea that political correctness is sweeping the American university, threatening both higher education and the broader right to free speech. But a brand new data analysis from Georgetown University’s Free Speech Project suggests that this ‘crisis’ is more than a little overblown.”

Extending basic intellectual charity to Beauchamp, I did not really scrutinize the metadata on the original FSP Medium post to make sure that it really was a “brand new analysis.” Turns out, it wasn’t. The FSP essay that Beauchamp relied on was actually published in March 2018, nearly five months before Beauchamp published his piece (and I wrote my initial rejoinder).

In other words, Beauchamp was not relying on current data from the FSP at the time he composed his essay – and he did not disclose this fact in his essay. This raises a couple of possibilities, both unsettling:

  1. Beauchamp was not aware that he was using out-of-date information because he neglected to look at the publication date on the FSP post before dashing off an essay about it (suggested by the “brand new analysis” verbiage) – and also neglected to go to the actual FSP website to explore the data prior to publication (which would have given him current numbers). This would underscore my point about failing to exercise due diligence.
    OR
  2. Beauchamp was aware that Dr. Ungar’s essay was published in March, but nonetheless framed Dr. Ungar’s analysis as “brand new” and declined to report the current data in his own essay – either out of negligence (i.e. he didn’t want to bother with basic research), or because the more current information was less convenient for the point he was arguing. Neither is a good look. And if Beauchamp was aware that Dr. Ungar’s information was actually from March (and hence, likely out of date), there is a secondary problem: why did he fail to disclose this fact – and instead present the analysis as “brand new”?

Beauchamp recently issued a correction acknowledging that the “brand new” analysis he described was actually from March 2018. Yet, in his attempted rebuttal, he makes no mention of this error – instead insisting that his original presentation of the data was completely accurate!

Before we dive into that, notice that both of my core critiques of Beauchamp vis-à-vis the FSP have already been vindicated: Beauchamp has issued a correction indicating one way he misrepresented the FSP data. He has also conceded that his presentation of the (non) threat — on the basis of FSP data — was out of step with Dr. Ungar’s own position that there is a serious problem.

Now let’s drill down a little more: Does this newly identified error by Beauchamp relate to the criticisms I offered in the initial essay? Yes. Here’s how:

I did work through the actual FSP database in formulating my initial essay, and found that there were more than 90 recorded incidents from campuses at the time, out of a total of 137 incidents overall.

If Dr. Ungar’s analysis was “brand new,” yet focused on only 90 incidents (60 from campuses), this would mean he was working with only a subset of the total incidents in his database — itself just a small subset of a much larger pool of incidents “in the world.” This was a fair reading, because Dr. Ungar himself refers to the incidents he analyzed as a “sample” which was not necessarily representative, etc.

Hence, I assumed that Beauchamp’s error was failing to understand that Dr. Ungar cited just a sample out of the total 137 incidents available in the FSP database as of August 2018 (when Vox covered Ungar’s “brand new” analysis… from March). And while Beauchamp did fail to grasp the nature of the FSP data (or he would never have used it to make bold claims about the overall prevalence of incidents) – in addition to this, he was also working with information that was several months out-of-date, and (intentionally or not) misled his readers (myself included) on this point. This is apparently why he insisted there were 60 incidents in the FSP data, instead of the 90 he could have easily retrieved through basic research prior to his article’s publication.

Now, it is striking that the number of campus incidents in the FSP database grew from sixty to ninety just between Dr. Ungar’s original Medium post and Beauchamp’s original Vox essay – this amounts to a 50% increase. And as Beauchamp himself noted in his attempted rebuttal, the number has grown further still since my rejoinder (published just a couple of weeks ago). I explicitly predicted this would happen, and it underscores the problem with trying to use Dr. Ungar’s data to speak to the overall prevalence of these incidents:

For the sake of argument, let’s run with Beauchamp’s “logic” that the number of incidents in the FSP database actually is reflective of the total incidents nationwide. Well, if there were “roughly only 60 incidents in the last two years” as of March… and by mid-August there were suddenly 90 incidents – then it seems as though the total number of campus incidents nationwide over the last two years actually increased by 50%, just between March and August. This would truly be astonishing, and a cause for concern – especially given that classes were not even in session for most of this period!

By the end of 2018 it is likely that there will be well over a hundred campus incidents in the FSP database from the last two years. Of course, this would not mean that the actual prevalence of incidents increased by nearly 100% (or more) since Dr. Ungar published his original essay. Why not? Because neither the cases Dr. Ungar analyzed in March, nor the number of events FSP will ultimately highlight by December, meaningfully speak to the general prevalence of these incidents at all.
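
To see why a growing tracker count tells us little about underlying prevalence, consider a toy simulation (entirely invented numbers, not FSP data): if the true incident rate is flat but the tracker’s coverage improves as the project matures, the cumulative count in the database still climbs month after month.

```python
# Toy simulation with invented numbers -- not FSP data.
import random

random.seed(0)

TRUE_INCIDENTS_PER_MONTH = 20   # assume a constant underlying rate nationwide
coverage = 0.10                 # fraction of incidents the tracker initially catches

logged_total = 0
for month in range(1, 13):
    caught = sum(random.random() < coverage for _ in range(TRUE_INCIDENTS_PER_MONTH))
    logged_total += caught
    coverage = min(coverage + 0.05, 0.90)  # the project gets better at finding cases
    print(f"month {month:2d}: cumulative incidents in database = {logged_total}")
```

The database grows every month even though nothing about the underlying rate has changed; its growth reflects coverage, not prevalence.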

In an attempt to defend himself against my critique, Beauchamp apparently reached out to Dr. Ungar himself. Vox readers were only provided with basically two statements out of what was presumably a conversation of at least several minutes – and even one of these statements was partially redacted through ellipses. Yet, despite the fact that the purpose of this essay (and its predecessor) is to highlight misrepresentations by Beauchamp regarding Dr. Ungar’s Medium essay, let’s just take it on faith that Beauchamp is not conveniently neglecting to share statements from Dr. Ungar which undermine his argument, and that he is faithfully representing the little bit of content from Dr. Ungar that actually made it to the page (redactions notwithstanding).

Beauchamp says he asked directly whether it was appropriate to try comparing the total number of incidents to the total number of universities (apparently bracketing the fact that the data he was relying on were non-exhaustive, non-representative, and out of date at the time of the Vox publication). Dr. Ungar’s indirect, hedged and polite response was, “I’m not sure I would say you were wrong...”

Now, we can’t know everything Dr. Ungar did say in their conversation, given how little Beauchamp actually included – but we can certainly note some things he apparently did not say:

First, let’s hammer home that he did not say there were “roughly only 60 incidents” between 2016 and March 2018. Hence, Beauchamp’s claim to this effect is simply incorrect.

[Screenshot of Beauchamp’s original Vox essay, claiming there were “only roughly 60 incidents in the last two years.”]

There were 60 incidents in the FSP database as of March 2018. This is emphatically not the same as saying there were actually “only roughly 60 incidents in the last two years.” A correction is warranted here from Vox. A more accurate version of the relevant sentence could read:

“The fact that the FSP database only showed around 60 incidents (as of March 2018) suggests that free speech crises may be somewhat rare events that don’t define…”

But would the FSP data even suggest this, really? To answer that question, let’s continue exploring statements Dr. Ungar apparently declined to make:

He did not say his original Medium post indicated anything at all about the overall prevalence of incidents; he did not say that it actually was appropriate to draw inferences about overall prevalence from the FSP data; nor did he say al-Gharbi was wrong with regards to the specific criticisms leveled at Beauchamp vis-à-vis the FSP. He basically dodged Beauchamp’s question and then shifted to express “delight” that the Free Speech Project received coverage in Vox.

That sort of response speaks for itself… and it does not send the message Beauchamp seems to hope.

Indeed, despite Dr. Ungar’s diplomatic evasion, the Free Speech Project website is quite explicit that the sort of maneuver Beauchamp attempted in his first essay was inappropriate given how preliminary their data are (emphasis mine):

“…the Tracker is a work in progress and should not be considered a complete listing of every instance in which freedom of speech was tested, challenged, or commented upon… As it grows in size and content, it should become a steadily more useful tool for analysis.”

And so, had Beauchamp actually consulted the FSP website when drafting his essay, not only could he have used current data in his original story, as I did in my rejoinder (rather than content from five months prior) — he could have also avoided making inferences about overall incident prevalence on the basis of this data (such as, “there were only roughly 60 incidents in the last two years”), which the FSP explicitly recommends against.

Sachs’ Database on Faculty Firings

Beauchamp represented Sachs’ findings on faculty firings as showing that left-wing professors were “more likely” to be dismissed for their political speech than conservative professors.

Sachs’ data do not show this. He himself flagged that his data was misrepresented, taking to social media to share and praise my essay about Vox.

In fact, not only did Sachs validate my concerns about Beauchamp’s essay, he went on to vindicate my critique of the GSS, against Yglesias’ protests, as well.

Sachs and I are not too far apart on most of these issues. Hence it is perplexing that, in the attempted rebuttal, Beauchamp suggests Sachs’ findings are “at odds” with my supposed claim that there is a campus free speech crisis.

Again, I did not claim there is such a crisis – indeed, in the very essay that Beauchamp is responding to, I explicitly said the evidence of a normative shift is not decisive, and that the “crisis” framing is unhelpful and ill-defined. So it isn’t clear how Sachs is “at odds” with me here.

Moreover, I am in full agreement with Sachs that one should avoid making sweeping claims on the basis of his firings database, given that firings are relatively rare (especially relative to other forms of speech sanction). It was Beauchamp who tried to make strong claims about relative likelihoods of firings (“left-wing professors were more likely to be dismissed,” emphasis his) – I merely pointed out that he misrepresented Sachs’ data, and that, when better contextualized, his claim that liberal professors are “more likely” to be fired for political speech than conservatives is unsupported. In fact, the opposite seems to be true.

Again, it was Sachs himself, in his Niskanen essay, who noted that, given the base rates of conservative to liberal professors, it may be the case that conservatives are more likely to be fired, despite the fact that most who are terminated are liberals:

“…the professoriate leans significantly to the left as well, so we should expect left-leaning speech to make up the bulk of terminations. As with the skewed findings of FIRE’s Disinvitation Database, we are not talking about a population where political ideology is uniformly distributed. It is possible for liberals to constitute the majority of faculty terminations and also for conservatives to be terminated at an equal or higher rate.” (Sachs’ own emphasis)

I merely demonstrated Sachs’ own point with the available data on faculty base rates… so, again, it is not clear to me how Sachs and I are supposedly “at odds.”

But it is clear that Beauchamp misrepresented Sachs’ data. In fact, following Sachs’ confirmation that his data was misrepresented, Beauchamp had to issue a correction for the second source I focused on in my rejoinder as well.

Let’s recap then. In my initial essay I claimed that Beauchamp misrepresented data from the FSP and Sachs. Beauchamp has actually issued corrections for his treatment of both of those sources.

Given this reality, it’s not clear exactly what Beauchamp takes himself to be “proving” in his attempted rebuttal. Yet because Beauchamp seems to put a lot of weight on the fact that he actually analyzed three sources, let’s consider the third as well:

Foundation for Individual Rights in Education (FIRE)

One of the things I criticized Beauchamp for in my original essay was failing to make full use of the very sources he cited when he was trying to understand the scope of the problem. For instance, he mentioned FSP but failed to rely on their full current dataset when composing his essay (setting aside the inferential issues regarding prevalence). He cited Heterodox Academy, misrepresenting our structure and mission, and then failed to incorporate the data from our Guide to Colleges into his picture on the prevalence of campus incidents. These are criticisms which Beauchamp has not contested: He did not use contemporary data from the FSP (despite describing the March 2018 data as “brand new”… in August). He did not use any data from HxA.

In line with this trend, Beauchamp offered only a cursory summary of FIRE’s disinvitation data in his original essay.

Based on Beauchamp’s summary, one would be forgiven for thinking that FIRE also believed the problem is pretty small. But, as with Dr. Ungar’s essay, I would urge readers to follow the link Beauchamp provided, and read the actual essay he references, because once again, the narrative he wants to spin on the basis of this data would be greatly undermined.

Moreover, as FIRE’s Robert Shibley wrote in response to Beauchamp, disinvitations are actually one of the least significant ways of measuring the “free speech” problem at universities. This is especially true given that, in Beauchamp’s attempted rebuttal, he claims that the “subject” of his original essay was “speech rights of faculty and students.”

If “speech rights of faculty and students” was the subject of his piece, it is curious that he mentions “rights” only twice in the entire essay (excluding listing the full name for FIRE) – once when mischaracterizing the mission of HxA, and once when describing the mission of FSP. All other times he uses “right” in the essay, he is referring to right-wing political partisans. In the rejoinder, he also invokes “rights” only twice (excluding FIRE’s name) – and one of these is the sentence where he insists “speech rights of faculty and students” were the “subject” of his essays all along!

This is a poorly-substantiated retcon attempt. However, even if allowed to stand, it actually does little to improve his case. In fact, it makes his neglect of FIRE’s resources even more glaring. If Beauchamp was primarily concerned with understanding rights and whether or not they were protected or threatened, then it is not clear why he was analyzing disinvitations (many of them at private institutions, where there is no “right” to the campus… and technically, no “right” to free speech). He should have instead been looking at university policies and relevant court cases, especially if he was already committed to analyzing data from FIRE (which works primarily on policy and legal issues).

In terms of lawsuits, etc. — as I mentioned in my initial essay — FIRE publicly lists hundreds of legal cases they were/are engaged in. These were completely absent from Beauchamp’s calculations.

In terms of policies, FIRE collects information on speech codes — and confirmed incidents of censorship or violations of due process — for more than 450 universities nationwide. Each of these schools is assigned a FIRE rating based on its official policies regarding academic freedom and due process. If Beauchamp was primarily concerned about “rights,” it seems as though this dataset is where he should have put his main emphasis.

But the story these data tell is pretty inconvenient for his thesis: nearly a third of the institutions in the set have a “red” rating – indicating policies that significantly threaten freedom of expression or due process. Meanwhile, only about 10% have a “green” rating, indicating strong, consistent support for academic freedom and due process.

In other words, if Beauchamp wants to retroactively portray his story as being about “rights,” then the data he chose to highlight in his essays were even less appropriate than they would have been if he were speaking about prevalence. And that’s saying something – because literally none of the datasets he relied on could effectively address the overall prevalence of campus incidents. Moreover, if he was primarily analyzing “speech rights of faculty and students,” the FIRE resources would pose an even larger problem for this thesis than they already do.

But let’s be clear: his original essay was not about “rights” — it was about how common free speech incidents are on campus. Indeed, the word “incident” was used more than 10 times in the original essay and 24 times in the attempted rebuttal (as compared to two instances of “rights” in each).

Beauchamp also ignored useful resources from FIRE regarding the question he was actually trying to explore (again, “How common are these campus incidents?”): beyond the disinvitation database (which he mentioned) and their publicly-listed court cases (which he neglected), FIRE also has a searchable database containing incidents of attempted censorship or due-process violations at the 450 schools they track. This dataset makes no pretense towards being exhaustive. Nonetheless, it contains hundreds of incidents.

It is very telling that there are this many incidents from just 450 universities, out of “4583 colleges and universities in the United States (including two and four year institutions).” If we are already at hundreds from FIRE’s data alone, based on roughly one-tenth of U.S. schools, consider what the number would likely be if FIRE could collect comparable data on the remaining 4,000 or so institutions of higher learning. My bet is we would end up at more than “dozens.”
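
As a rough back-of-envelope illustration (not an estimate: FIRE’s roughly 450 rated schools are not a representative sample, and the incident count below is just a placeholder for “hundreds”), naively scaling that rate to all 4,583 institutions lands far beyond “dozens”:

```python
# Back-of-envelope scaling with placeholder numbers -- illustrative only.
tracked_schools = 450
total_us_schools = 4583             # figure quoted in the essay
incidents_among_tracked = 300       # stand-in for the "hundreds" in FIRE's database

incidents_per_school = incidents_among_tracked / tracked_schools
naive_national_figure = incidents_per_school * total_us_schools
print(round(naive_national_figure))  # roughly 3,000 under this crude assumption
```

Even if the untracked schools experienced incidents at only a small fraction of that rate, the national total would still dwarf “dozens.”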

In short, rather than helping Beauchamp in any way, including FIRE among his list of sources only reinforces concerns that Beauchamp is underutilizing the very data sources he’s citing.

The FIRE example also provides yet another illustration of Beauchamp presenting a picture of the world that is out of step with that of his sources. FIRE strongly disputes Beauchamp’s portrayal of the free speech situation on campus. Beauchamp obliquely acknowledged this in his attempted rebuttal, but failed to mention or address that FIRE also published a rejoinder to him, which I linked to in my original essay, and have shared again above.

Again, had Beauchamp reached out to FIRE President Greg Lukianoff, as he eventually did with Sachs and Ungar, he would have been informed (as I was informed) that even the hundreds of incidents Beauchamp could have easily culled from FIRE’s publicly-available resources are just scratching the surface – they actually receive about 1,000 direct requests for help each year, only a fraction of which they can ultimately pursue in court, etc. (due to constraints in resources, manpower, and the like).

In short: Beauchamp is correct to insist that he did briefly mention FIRE in his original essay, despite my claim to the contrary. However, this is of little use for him, because his treatment of the data from FIRE was just as problematic as that from Sachs and the FSP.

More Essays, More Problems

As the preceding sections showed, Beauchamp’s attempted rebuttal did not dislodge any of the core criticisms I offered of his piece. If anything, he dug the hole deeper by adding FIRE into the mix as yet another institution whose data he misrepresented and underutilized (rather than simply ignoring, as I initially suggested).

But there still are a few more issues we have to flag. First, a problem I alluded to in my original essay, but which needs to be rendered more explicit because Beauchamp doubled-down on the error in his attempted rebuttal: he repeatedly claims that the data from Sachs, the FSP and FIRE all seemed to tell “the same story” — “dozens” of incidents. This is a basic statistical error.

In fact, all of these datasets were speaking to different phenomena — disinvitations v. faculty firings v. campus protests, etc. – meaning they each tell a different story. Given that the cases in each of the sets are generally non-redundant, they would actually need to be combined (i.e. added together) in order to tell the “same story” about campus incidents in general. So let’s do that:

Between 2016 and the time Beauchamp wrote his essay, the FSP database had come to include 90 campus incidents. There were also 43 incidents from Sachs’ database on faculty firings within this period, and 88 disinvitations from the FIRE dataset. Summing them up we can see that, just from the narrow range of data Beauchamp himself cited, a more accurate description is that there were “hundreds” of free speech incidents, not “dozens,” in the last two years. Had Beauchamp incorporated the full range of publicly-available data from FIRE and HxA, both sources he cited in his original essay, the number would be much higher still.
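
Here is the simple tally behind that sentence, using only the counts cited above and treating the three sets as non-overlapping (as this essay argues they generally are):

```python
# Simple tally of the counts cited in this essay (2016 through August 2018).
fsp_campus_incidents = 90    # Free Speech Project database, campus incidents
sachs_faculty_firings = 43   # Sachs' faculty-termination database
fire_disinvitations = 88     # FIRE disinvitation database

total = fsp_campus_incidents + sachs_faculty_firings + fire_disinvitations
print(total)  # 221 -- "hundreds," not "dozens," before adding HxA or other FIRE data
```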

And although we actually don’t have to speculate about all the incidents that HxA, FSP, FIRE and Sachs fail to capture in order to clearly see that Beauchamp underrepresented campus incidents in his essay(s) – it is worth noting again that, even collectively, these datasets are nowhere near comprehensive in capturing incidents of suppression of speech or ideas. As I flagged in my initial rejoinder:

“All of us are only looking at situations that involve terminations, make the news, or end up in court… We are not able – even collectively – to capture all publicly-available incidents. We will never be able to capture other, likely far more prevalent, incidents of suppression of speech or ideas that do not end up in major media outlets, in courtrooms, etc. As a result, the default assumption should be that the problem is likely worse than the available data suggest (maybe not by much… but also, maybe by a lot).”

II.

In addition to doubling down on previous mistakes, Beauchamp concludes his essay by committing a few new errors that are worth noting: First, he again attributes to me a belief that there is a free speech crisis, despite my repeated assertions — in the very essay he is responding to — that I do not actually hold that view. For instance:

“I do not have any particular investment in whether or not there is a free speech ‘crisis’ (I try to avoid this kind of language myself, as a rule). Nor do I have any stake in whether or not the contemporary cohort of young people (iGen) are profoundly different from previous generations. My only aim here was to debunk low-quality research / analysis on these topics, not to argue for or against any particular position on them.”

I’m not sure how I could have made my own position any clearer. Yet Beauchamp nonetheless structured his entire “rebuttal” as an attempted debunking of a belief that I explicitly do not hold (even calling his attempted rebuttal the “Myth of the Campus Free Speech Crisis” as though that is any kind of refutation of my own position). See: Straw-man fallacy.

Second, he runs together the issues of an alleged “free speech crisis” and “liberal bias” in academia — in an attempt to dismiss both.

Example:

“So why does this all matter? It matters because claims of a campus free speech crisis (al-Gharbi’s piece included) unintentionally bolster a right-wing narrative that the campus is a haven of out-of-control liberalism — and that something dramatic needs to be done to address that. In a vacuum, the notion of promoting ‘viewpoint diversity’ is laudable. But we aren’t operating in a vacuum: We’re operating in a world where Republican legislators are using allegations of a campus free speech crisis and liberal bias among the academy to further efforts to crack down on individual freedom.”

We should definitely separate the issue of the “free speech crisis” from the “liberal bias” in academia. Here’s why:

As I have repeatedly stated in both this essay and the previous one, it is a live debate whether or not there are significant changes underway in terms of how young people view speech — and whether those changes would constitute a “crisis” even if established. Skepticism here is perfectly fine (notwithstanding Beauchamp and Yglesias’ failed attempts at “proving” there is no crisis).

However, the evidence of deep political bias in social research fields is far clearer: It affects how social problems are defined and studied. It affects how committees of professors make decisions in peer review, hiring, promotion, grad school admissions and beyond. It affects which materials are assigned to the curriculum and how they are engaged. It affects the opinions of religious and conservative students about whether the academy has a place for them, or whether they would be better suited elsewhere. It affects how policymakers and the public evaluate the credibility or utility of social research. And it affects virtually all of these things in a negative way.

Again, this is the problem Heterodox Academy was created to address: homogeneity and insularity within the humanities and social sciences. It is a serious problem which undermines the quality and impact of research and pedagogy. And it is not just a problem for academics: to the extent that good social research is distrusted, or biased and unreliable social research is utilized, this has negative downstream consequences for the populations scholars study and often wish to empower or assist (typically those of low socio-economic status, or those from historically marginalized or disenfranchised groups).

Conclusion: “Heterodox Academy, and why this debate matters at all”

Let me conclude by returning to the theme I led with: in this highly-polarized political moment, it is generally assumed that if someone is pushing back against a popular left-leaning narrative, or espousing an inconvenient view for the left, then they are de facto aligned with the right, intentionally or not. Beauchamp’s rebuttal attempt provides a great example of this fundamentalist thinking: highlighting systemic political bias or threats to free speech on campus will help the right – regardless of one’s intentions – and so, apparently, we should not talk about these issues (except, perhaps, to deny they are a big deal).

I am deeply familiar with this “logic”: as a Muslim scholar who, until recently, worked exclusively on national security and foreign policy issues, it was regularly *suggested* to me that criticism of the “War on Terror” – especially by “people like me” — provided cover or ammunition for al-Qaeda, ISIS and their sympathizers. In the view of these critics (mostly on the right), I was aiding and abetting “the enemy,” intentionally or not.

There was even an article published in the National Security Law Journal which argued that I, and academics like me (by which the author seemed to mean: Muslim, left-leaning, and politically “radical”) should be viewed as enemies of the state — and could legitimately be targeted by national security and law enforcement agencies. This article was eventually retracted, and its author forced to resign from his position at West Point (as described in the Washington Post here). But suffice it to say, I *get* the kind of narrative Beauchamp is trying to spin here, and I reject it whole-cloth.

I challenge U.S. national security and foreign policy precisely to render it more effective, efficient and beneficent – because I actually have “skin in the game” with regards to how the military is deployed. I relentlessly criticize bad research on Trump and his supporters because it is important for the opposition to be clear-eyed and level-headed about why he won – to help ensure it does not happen again. A similar type of motivation undergirds my critique of Beauchamp and Yglesias:

It does not help the left or academics to respond to distortions and exaggerations on the right by denying that there is any significant problem. It is especially damaging for “wonks” or academics to dress up these kinds of political narratives (essentially, propaganda) as social research – even more so if this “research” suffers from glaring errors or shortcomings like the essays criticized here.

Such a strategy is self-defeating because it is the left, those in humanities and social sciences, those from historically marginalized and disenfranchised groups, and those who seek to give voice to these perspectives or to help these populations, who stand to lose the most if the credibility of social research is further eroded due to perceived partisanship.

One brief example from an essay Jonathan Haidt and I wrote for The Atlantic:

  • Most of the major “free speech” blowups have happened at elite private schools (or “public Ivies” like Berkeley) – which are disproportionately attended by upper-income and white students, and disproportionately staffed by faculty who are white and male.
  • Yet, which schools are paying the cost for public dissatisfaction about the state of higher ed (driven in large part by these incidents at elite, private institutions)? Public land-grant schools like University of Arizona (my alma mater): the very schools that are most likely to educate lower-income and minority students, and the very schools that are most likely to have tenured or tenure-track professors that are women and minorities.
  • Within these schools, which programs are first on the chopping block? Humanities and social sciences – the very fields in which women, blacks and Hispanics are most likely to hold professorships, and in which students of color and women are among the most likely to enroll.

Hence, what Heterodox Academy is trying to achieve isn’t something “laudable” in the abstract, for those “operating in a vacuum” (as Beauchamp describes). The reverse is true: some people have the luxury of ignoring or denying the problem because they are not directly grappling with the fallout. Beauchamp graduated from Brown and the LSE (Yglesias, from Harvard). For lack of a better way to say this: It shows… and not in the ways one might hope.

Yet, if concerned about social justice, it is absolutely essential for those who are part of elite institutions (including those at my current home, Columbia) to understand these dynamics, and to be cognizant of the way their actions can have ramifications for less privileged students and faculty, especially those at less insulated (i.e. virtually all other) colleges and universities.

This is a tough pill to swallow. I get why many on the left, especially at elite universities and media outlets, would rather just say “nothing to see here,” than to confront these realities. But it will not do, for all of us to simply close ranks and insist “there is no problem, we will make no changes.” Because there is a problem — and change is coming to institutions of higher learning, one way or another.

At Heterodox Academy it has always been our hope and expectation that when professors and administrators come to understand the seriousness of the challenge we face, they will rise to the occasion — out of their own commitment to truth and rigor (or self-preservation!) — and correct course while we still have choices regarding how our institutions and practices are best reformed. In order to facilitate these efforts, HxA produces and consolidates research, tools and strategies to help university stakeholders understand and address the lack of viewpoint diversity, mutual understanding and constructive disagreement within institutions of higher learning. Soon, we hope to foster networks within and between research fields (or institutional roles) to further accelerate the reform process.

However, we do all this with an acute awareness that if we fail in our mission — if social researchers cannot restore sufficient faith in our work, and in our academic institutions — then we are likely to see continued declines in enrollments, and even more ham-fisted and harmful legislation of the sort Beauchamp highlighted. In fact, a segment from the previous essay provides a fine note to close out this whole discussion:

“[Dr. Ungar] also posits, regarding the myriad laws being passed to help “fix” institutions of higher learning – often these “cures” seem to be worse than the “disease” they are trying to address. Many in Heterodox Academy share these sentiments. In fact, HxA Research Associate and NYU Law student Nick Philips has argued both of these points in recent publications: campus conservatives must check their own trolling; attempting to legislate away the socio-political tensions within universities is probably a bad idea. FYI: Nick is a conservative. These do not have to (and should not) be partisan issues.”

No doubt, there is an active and cynical campaign by some on the right to reduce the complicated challenges facing universities into wedge political issues. But here’s the thing: we actually don’t have to oblige them (by reflexively adopting a simplistic position, diametrically opposed to theirs). We can steer this in a different direction. And I hope that’s what we do.

Published 9/7/2018 by Heterodox Academy

