Why misinformation and disinformation on the Left are harder to operationalize measurements for.

Kester Ratcliff
4 min read · Mar 28, 2023

--

This post was prompted by Rathje, S., Roozenbeek, J., Van Bavel, J.J. et al. Accuracy and social motivations shape judgements of (mis)information. Nat Hum Behav (2023). https://doi.org/10.1038/s41562-023-01540-w

A confession first: I haven't read the paper's Methods section yet, so I don't know whether these general doubts (about how 'misinformation' is identified in studies of it, and how that might unintentionally bias the common apparent result that liberals, or leftists, are less prone to believing it) apply to this specific paper. Still, I think they're valid concerns in general.

I saw Steve Rathje's reel on Instagram summarizing the paper's conclusions:

Steve Rathje on Instagram: “New paper out today in Nature Human Behaviour #psychology #psychologyresearch #misinformation #fakenews”

Many studies on this topic have come to the conclusion that liberals are less prone to mis/dis-information. It may indeed be true to some extent, but I suspect they (unintentionally) exaggerate the effect size because of these methodological problems:

  1. Misinformation aimed at liberals is more contextually complex, so
  2. it requires more contextual knowledge to distinguish, and
  3. it is more mixed in with truth, i.e. the ratio of falsities to facts is lower;
  4. false causal or moral attribution claims are made more implicitly; and
  5. it relies more on significant omissions as a form of misinformation.

So it’s harder to spot or to clearly distinguish and categorize.

Liberal values are more universalist, which is why we engage with more international news and long-running feature issues; but that means the contextual knowledge needed to spot misinformation in our information diets is more complex, more varied, and simply greater in volume. The misinformation elements of news stories aimed at liberals are also, I think, usually a smaller proportion of the total fact claims in an article or piece.

An example of a study used as a basis for identifying misinformation: Zimdars et al. (2016), relied on in Nikolov, Flammini, and Menczer (2021); see my previous discussion of both, with the original references there too.

I've done some of this sort of disinformation-source network mapping, content analysis, and categorizing of sources as mis/dis-information or not, like Zimdars et al. (2016). The work I participated in was partly published in the Washington Post (PM me for a link), but not in a peer-reviewed academic journal. I don't object to this sort of method in general, but I know from doing some of it that it's harder to be certain about the distinction between mis/disinformation and not with content made for liberals.

Too many people, mostly not specialists, have an unrealistically simple, binary approach to categorizing news content as true or false. Most people seem to assume it’s simply either based on events which really happened, so true, or totally not based on real events, so false. That’s also implied in the unfortunately popular term ‘fake news’. If only it were so simple!

Goebbels, in his Principles of Propaganda, wrote that propaganda must not exceed a ratio of roughly 60% truth to 40% lies. Otherwise, a propaganda system's grey and black outlets (grey meaning semi-covert and pretending to be independent; black meaning fully covert and pretending to be from the opposite side), which may have been cultivated and invested in for a long time, lose their perceived credibility and become useless, so those assets are lost. In general: the perceived credibility of content, and of the outlets used to distribute it, goes up as the ratio of lies to facts goes down.
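To make the ratio idea concrete, here is a minimal sketch of scoring a piece on a continuum by its fraction of false claims, rather than a binary true/false label. Everything in it (the `Claim` type, the labels, and the example article) is invented for illustration; in practice the per-claim verdicts would come from a fact-checking pass, which is the hard part:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    is_false: bool  # verdict from a (hypothetical) fact-checking pass

def falsity_ratio(claims):
    """Fraction of claims judged false: 0.0 = all true, 1.0 = all false."""
    if not claims:
        return 0.0
    return sum(c.is_false for c in claims) / len(claims)

# A mostly-true article can still be misleading: two accurate claims
# framing one false causal attribution.
article = [
    Claim("The 2003 Iraq invasion happened.", False),
    Claim("It was widely seen as a disaster.", False),
    Claim("Therefore all intervention causes disaster.", True),
]
score = falsity_ratio(article)  # 1/3, well under Goebbels's 40% ceiling
```

A continuous score like this at least makes the 'mostly true but misleading' category representable, which a binary label cannot do; it still misses implicit claims and omissions, which is the point of the sections below.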

This is especially true of liberal mis/disinformation content: if the first endoxon (commonly accepted opinion) referenced at the start of a piece is true, or generally believed to be true, then whatever follows is usually accepted as true, even if it has no real causal or logical connection. E.g. a reference to the second Iraq war having been a disaster, usually with no clarification of how or why, because the purpose of the reference is almost always to rationalize an absolute, invariable anti-interventionism, in favour of international impunity for non-Western-allied dictatorships and their mass atrocities.

I also noticed that in liberal/left-wing semi-mis- or disinformation, especially that by Elder Tankie opinion "journalists", the authors hide their main causal and moral responsibility attribution claims in the adverbs describing contiguous events, or in the metaphorical mental images linguistically conjured up to connect paragraphs, rather than stating them directly and providing an explicit evidential basis for them. This also makes it harder to clearly identify, and externally justify, that those claims are false, because they are only implied, never stated.

These five factors make it harder to reliably identify and measure mis- or dis-information on the liberal left, especially when the method has to be simplified enough to be carried out mostly by student assistants. I think many methods set these difficulties aside for the sake of internal reliability, but doing so compromises external reliability, and that could contribute to the apparent result that liberals are less prone to misinformation.

Personally, my hunch is that even after controlling for these hypothesized methodological artefacts there would still be a real effect: liberals and leftists are relatively less prone to believing misinformation. But studies need to control more clearly for these methodological difficulties before concluding reliably how large that effect is.

--


Kester Ratcliff

Lapsed biologist retraining as a social data scientist, often writing about refugee rights advocacy and political philosophy.