How has the structure of the Public Sphere changed since Facebook?

Current internet architecture, Enlightenment and Consumerist myths about human cognition, epistemic bubbles vs. echo chambers, and their leaders’ collective narcissistic delusions.

Kester Ratcliff
Aug 10, 2018 · 16 min read

It is tempting but probably untrue to attribute malice to most people who believe narratives about Syria which objectify people and produce evil outcomes. Most people are sincerely misled ‘useful idiots’ rather than deliberately evil agents. Instead, I present my theory of how people came to believe so much bullshit:

Humans do not actually make collective decisions and actions (i.e. ‘politics’) by individually rational processes most of the time, despite Enlightenment myths theorising otherwise. Most of our decisions are made by simplifying and approximating decision-making mechanisms (‘heuristics’) based more on social cues and our historical knowledge about sources than on the particular new information content from those sources. Such social cognitive heuristics are adaptive in a matching environment, but they become systematic biases when the information environment is structurally mismatching.

Social cognitive heuristics enable us to make collective decisions that are, on average, more accurate and efficient for survival and reproduction than anything we could achieve by individual cognitive processes alone. Heuristics are, in themselves, less accurate than rational processes but much more efficient, and, given the complexity of our specific niche, we could not live without them.

I think it’s a mistake to regard social cognitive heuristics as inferior to individually rational processes: both social and individual cognitive processes are necessary, and they work best when they’re well integrated. The notion that individual rational processes are really separable from social heuristics, and superior to them, is a fantasy imagined to give people with more free time and attention to spend on individually rational processes a pretext for their preexisting sense of social superiority.

The socially constructed information environment (particularly the data structures and algorithms of current social media, including search engines) is structurally mismatched to the environment in which our social cognitive heuristics evolved and to which they are still adapted, and the mechanisms behind which new particular information is selected and ranked for us are opaque, so that we don’t understand the environment we’re interacting in. Under those conditions, we tend to make inaccurate decisions more often, and our collective decision-making processes become so costly and inefficient that more people withdraw or reduce their participation and investment in the collaborative effort of deciding responsibly as members of (relatively) democratic societies. I think these are the most basic causes of our currently declining state of functional democracy.

Social cognitive heuristics, especially the filtering and weighting of new input information, and individually rational cognitive processes are not really separate, just as inductive and deductive reasoning cannot really be separated. The input information for individually rational processes almost always comes via complex social chains of trust (very little of what we individually rationally process have we seen or heard with our own eyes or ears), which also depend on social epistemic norms and institutions. The people we recognise as ‘most rational’ in our societies are not simply those who do individual rational processing well, but rather those who have learned good social epistemic behaviours and mostly play by the rules of the institutions that maintain the cooperative specialisation of epistemic labour.

The current design of social media does not give us reliable information about who is really telling us what, whom they’re associated with, and what they’ve said before. Exploiting that fact, the Kremlin’s propaganda strategy on Syria has proliferated sources and disguised their real associations, so that people think they are seeing claims being independently corroborated, when it is actually covertly associated repetition from the same propaganda sources.

(*Kate Starbird and her research group call a closely related phenomenon ‘false triangulation’. The difference is that ‘false triangulation’ refers to encouraging people to “do your own research” and “question more” after setting a trap of a sub-network of covertly associated sources and narratively persuading the audience to distrust all outsiders’ information, whereas my term ‘pseudo-independent corroboration’ refers to the covertly associated network of ‘independent’ and ‘alternative’ news sources itself.)

The structural mismatches between the environment our social cognitive heuristics are adapted to and our new social media environment rest on two fundamental and unrealistic modelling assumptions, both cultural: 1) the Enlightenment myth that humans can or should be individually rational all or most of the time; and 2) the consumerist myth that ‘truth’ is or can be modelled as a private consumer good, and that it will increase if only the ‘market of ideas’ is freed from external constraints and all but the most minimal limiting interventions, implicitly assuming that the criterion of ‘truth’ is how much truth claims accord with our individual subjective consumer preferences.

The former myth in practice demands impossible levels of time and attention from citizens as individuals, while rendering civil society’s networks of cooperative specialisation of knowledge, and the trust between specialists and the general public, invisible or accidental to the system. In practice this means fewer people can afford the time and attention to participate directly, which further alienates democratic decision-making to authorities. The latter myth then seems to relieve the burden of democratic responsibility even more by reducing ‘truth’ to a matter of individual consumer preference, which also enables the populist strategy of winning power by discrediting authoritative institutions and specialists and claiming, without really earning it, the trust that they or their predecessors created.

The twin myths of individual rationality and the consumerist social imaginary are built into the structure of our social environment more than ever before. People unconsciously adapt their own mental frameworks to fit the implicit assumptions and norms of their social environment, including those engineered into the structure and mechanisms of the new social media environment, as if they were natural givens. This mode of enculturation into a social imaginary is almost certainly far more efficient than conscious communication.

Image from Wikipedia https://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms

We’re being presented with information in a structure as if we were ants, but we’re not ants. The structure of the information environment on social media is, in the simplest and most abstract terms, a unidimensional array, like an ant colony’s scent trails. We’re assessing a trail of new information selected and ranked for us according to marks left in our environment (likes, comments, shares), a stigmergic process, again like ants, rather than interacting directly with each other with the full human complexity of social and diachronic (across-time) information about who is telling us what and how trustworthy they are.

Compared to the human ancestral environment in which our current cognitive traits evolved, the structure of this environment could hardly be more different. Then, we would have had plentiful social and diachronic information about the sources of new particular information, which are the kinds of data our social cognitive filtering and weighting mechanisms work on; only after filtering and weighting the reliability of input information might we individually rationally process some of it.

In the new social media environment, we are deluged with new particular information, but there are no structures that make it effortless to see reliable social and diachronic information about its sources. It’s common to say social media overwhelms us with information, but it also fails to give us the kinds of information about new information (social metadata) that our social cognitive heuristic mechanisms need to work accurately and efficiently. It’s like we’re suffering both a flood and a desert at the same time.
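To make the ant-trail analogy concrete, here is a minimal sketch of purely stigmergic ranking (in Python; all names, weights and the engagement model are my illustrative assumptions, not any platform’s actual ranking): items are ordered solely by the accumulated marks left by previous passers-by, and the data model simply contains no social or diachronic information about sources.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A piece of content in the feed. Note what is absent from the
    schema: nothing about who the source is, whom they are associated
    with, or what they have said before."""
    text: str
    likes: int = 0
    comments: int = 0
    shares: int = 0

def stigmergic_rank(items: list[Item]) -> list[Item]:
    """Order the feed purely by the 'scent marks' (engagement counts)
    left by previous passers-by, like ants reinforcing a trail."""
    def mark_strength(item: Item) -> float:
        # Arbitrary illustrative weights for each kind of mark.
        return 1.0 * item.likes + 4.0 * item.comments + 8.0 * item.shares
    return sorted(items, key=mark_strength, reverse=True)

feed = stigmergic_rank([
    Item("measured report from a known source", likes=12, comments=2, shares=1),
    Item("outrage bait from an unknown source", likes=300, comments=90, shares=150),
])
print([item.text for item in feed])  # the heavily marked trail comes first
```

The point is the schema, not the weights: a feed like this is a one-dimensional trail ordered by marks, and the social metadata our filtering heuristics evolved to work on is not in the data model at all.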

So we’re making collective decisions inaccurately and inefficiently, like the ants in my academic supervisor’s experiment. In that experiment, 176 colonies of ants in large 20 cm Petri dishes, with no prior environmental structure within which to build their network of scent trails (their communication system for resource-allocation decisions), died off much faster than normal, even though they had unconstrained resources and no predators or parasites. For both us and those ants, the structures of our information environments don’t match our social cognitive mechanisms. And, like the ants lacking prior environmental structure to give order to their communication network of scent trails, we lack a structural way to effortlessly see the relevant kinds of social data about the trail of new information in front of us.

Why am I introducing this list in this way? Because I think it’s important to explain how so many people became so much more susceptible to such gross manipulation of their collective decisions and behaviour, and how more people than ever before fell for so much blatant bullshit. Criticising people for making mistakes often results in them denying their mistakes and doubling down harder; that’s what I’m trying to avoid by explaining, in such abstract terms, how people can become so seriously misled without consciously malicious intent.

It’s not typical of human beings to act so cluelessly about social information. We are a hyper-social species, almost eusocial; we normally chatter constantly, except when we’re thinking about what we saw or heard about each other. I believe we do have the mental capacity to process the volume and rate of news we face now, but only if we get the relevant kinds of social and diachronic data about sources, which we need to update our mental map of the network of sources and our allocation of trust across it, so that our cognitive filters can work accurately and efficiently. If our collective decision-making is too inaccurate, we make disastrous decisions; if it is too inefficient, meaning it demands more time and attention than people can afford in order to be responsible democratic citizens, it alienates people further, fewer people engage, and democracy fails.

Timothy Snyder has recently published very similar conclusions starting from a different perspective. Of course we’re not saying it’s the only factor, but I do think it’s the critical factor which provides the simplest plausible explanation for why our problems have grown so much bigger in the last ten years or so.

This has become for me an in-depth example of a general problem. I want people to understand not in order to blame them and leave them in a state of guilt, but so that we can do better. If we don’t do better, I seriously think it might become our undoing as an international order of democratic societies.

Humanism, a belief system based on ideas of universal and inherent human moral dignity and rights, as Levinas (Humanism of the Other, 1964–70) and Maritain (Integral Humanism, 1939) defined it, requires a certain kind of belief in the existence of objective truths as a foundation for its universality. Understanding what is objectively true for the other person is infinitely difficult; we can never perfectly comprehend another person, they are always a mystery. But humanism interprets that fact as implying an inherent, specifically human ethical duty to struggle endlessly to understand and to respond justly to the other, never as a pretext for irresponsibility.

Postmodernist anti-humanists, on the other hand, such as Vladislav Surkov, Aleksander Dugin and Ivan Ilyin, use the transcendental incomprehensibility of the objective truth of the Other as starting material for their working theory of propaganda for the Kremlin regime, in which “everything is possible and nothing is true” (Hannah Arendt’s phrase, reused by Peter Pomerantsev as the title of his 2015 book), so that meaning appears to be totally subjective, as for Sartre and Lyotard.

The relevance of this is that much of the Assadist camp present their claims as a “different perspective”, as if there were no objective truth about what actually happened to the victims, treating ‘truth’ as merely instrumental to their political agenda, picking bits of truth to use as weapons and weaving them into bigger and more credible-seeming lies. Although this is highly abstract, it is not impractical: the practical consequences are life or death for many. And ultimately we are not separate; our imagined communities and borders are social fictions, and the norms we practice on others are inherently systematic and will inevitably reflect back on us, though the consequences may be delayed.

‘There is no them, only us’, and
‘if you tolerate this, then your children will be next, will be next, will be next.’

Things resembling propaganda, often used to obfuscate what it really is

Propaganda is different from news reporting and from honest opinion. I define ‘propaganda’ in the simplest terms as unreasonable means of persuasion. Selecting which news is considered worth reporting can be intended to persuade people; for example, human rights violations monitoring organisations selectively report about people who have been abused, in order to persuade other people to care and act. That is not an unreasonable means of persuasion, and it is not ‘propaganda’ merely because of that selectiveness. Reporting with a purpose of persuading people, while being fairly open about that purpose, is advocacy, not propaganda.

Unreasonable means of persuasion can also involve the way an argument is framed: hiding imputations of causality and blame in seemingly descriptive terms connecting two facts, rather than straightforwardly making the case that there is a causal connection and that blame deserves to be located there. Deploying only reasonable means of persuasion I would call ‘honest opinion’, not propaganda.

‘Reasonable persuasion’ presupposes attributing a basic level of bona fide trustworthiness to those who may be opposed to what we want to persuade them of. We try to persuade people reasonably when we believe that, if we present them with relevant, valid and adequate evidence for what we’re trying to persuade them of, and/or some fairly cogent moral reasons, they will judge our attempt to persuade them according to some common values and as members of the same community. Reasonableness assumes a basic level of mutual respect in communications. That is why it is normally absent in communications between members of an echo chamber and outsiders.

Propaganda not only attempts to persuade people unreasonably of the particular beliefs in it but also spreads the social behaviour of redistributing trust away from previously trusted sources and towards new ones, which also means re-allocating attributed political authority to those by whom, or for whom, the propaganda was designed. This reallocation of trust and political authority is generally more important than the particular claims in propaganda, which may not even be particularly persuasive in themselves.

Besides disinformation and concealing false moral judgements in framing terminology, there are also manipulative rhetorical techniques: presenting a fact claim which the intended audience will perceive as obviously true, then quickly jumping rhetorically to the conclusion you want them to believe, without any relevant evidence or reasoning for that causal nexus. Or taking two true facts and hiding a third, false imputation in the framing terms supposedly describing how they are causally connected. People enculturated into an epistemic bubble around sources that frequently use such rhetorical sleight of hand usually do not recognise the illogical jumps in the narratives presented to them, though they are obvious to outsiders. (I confess a slight glimmer of hope: I think this propagandistic, sleight-of-hand way of writing is less normal in the younger generation of journalists.)

Another rhetorical technique of propaganda is manipulating the anchoring and adjustment heuristic into a bias. The anchoring and adjustment heuristic is one of the efficient, approximating cognitive mechanisms we use to make most judgements: we assess whether the distance between an anchoring perception of something ‘true’ and the adjustment to a new perception feels credible to us. The ‘anchor’, the element of truth used to create the sense of distance to the new perception or claim, may itself be faulty, but people usually won’t question it when the claim appears to be an adjustment to something presented as ‘obviously true’. Rhetorically skilful manipulation of the anchoring and adjustment heuristic can make a false claim seem credible or a true claim seem incredible.
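A crude numerical sketch may help, on the classic ‘insufficient adjustment’ reading of the heuristic (the adjustment factor and the numbers are purely illustrative assumptions): people move from the anchor towards the evidence, but only partway, so whoever plants the anchor largely determines where the judgement lands.

```python
def anchored_estimate(anchor: float, evidence: float, adjustment: float = 0.4) -> float:
    """Insufficient-adjustment model of the anchoring heuristic:
    the final judgement moves from the anchor towards the evidence,
    but only by a fraction of the distance."""
    return anchor + adjustment * (evidence - anchor)

# The same evidence yields very different judgements depending on the anchor.
print(anchored_estimate(anchor=10, evidence=100))  # 46.0
print(anchored_estimate(anchor=90, evidence=100))  # 94.0
```

The propagandist’s trick is to plant the anchor: present a faulty starting point as ‘obviously true’, and the audience’s insufficiently adjusted estimate stays close to it.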

Epistemic Bubbles vs. Echo Chambers: a taxonomy of social media clusters

C Thi Nguyen’s philosophical anthropology of epistemic bubbles, echo chambers, and (briefly implied) collective narcissistic delusional disorder:

Everyone in the social media environment has an individual filter bubble created by the personalisation algorithms and their previous interactions. Individual filter bubbles aggregate into clusters, and there are important qualitative differences between some of those clusters. The first, simplest type of cluster in the information-sharing and decision-making network is what C Thi Nguyen has called an ‘epistemic bubble’. The name doesn’t specify that it is a collective phenomenon, but it’s clear in his essay that that’s what he means.

A person in an epistemic bubble lacks information from outside their bubble, or, more precisely, the information which reaches them in their collective bubble is a systematically biased sample of the information in the whole network.

A person in an echo chamber may also receive a statistically biased sample of information, but the difference is that in addition they have a socially cultivated and exaggerated distrust of outsiders, and social resistance behaviours against information which contradicts the group’s core beliefs and identity. Aggressive collective reactions against information which threatens the group’s identity narrative are the distinguishing sign of an echo chamber.
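The distinction can be made concrete with a toy simulation (all the probabilities and weights below are illustrative assumptions, not measurements). A bubble member under-samples outsider messages but takes at face value whatever arrives, so more exposure corrects the bias; a chamber member additionally discounts or inverts outsider messages, so more exposure barely helps:

```python
import random

random.seed(0)

# Each message carries a signal: +1 supports the true account, -1 a false
# narrative. In this toy cluster, insiders mostly repeat the false narrative
# and outsiders mostly carry the corrective signal.
def make_messages(n=10_000):
    msgs = []
    for _ in range(n):
        insider = random.random() < 0.5
        if insider:
            signal = -1 if random.random() < 0.8 else +1
        else:
            signal = +1 if random.random() < 0.8 else -1
        msgs.append((insider, signal))
    return msgs

def bubble_belief(msgs, p_outside_reaches):
    """Epistemic bubble: outsider messages are merely under-sampled;
    whatever does arrive is weighted normally."""
    received = [s for ins, s in msgs if ins or random.random() < p_outside_reaches]
    return sum(received) / len(received)

def chamber_belief(msgs, p_outside_reaches, outsider_trust=-0.5):
    """Echo chamber: the same biased sampling, plus cultivated distrust;
    an outsider message that arrives counts against its own content."""
    received = [s if ins else s * outsider_trust
                for ins, s in msgs if ins or random.random() < p_outside_reaches]
    return sum(received) / len(received)

msgs = make_messages()
for p in (0.1, 0.9):
    print(f"outsider reach {p:.0%}: "
          f"bubble belief {bubble_belief(msgs, p):+.2f}, "
          f"chamber belief {chamber_belief(msgs, p):+.2f}")
```

Running this, the bubble’s average belief moves towards the truth as outsider reach increases, while the chamber’s stays firmly negative: on Nguyen’s account, this is why mere exposure can pop a bubble but not a chamber, because the chamber’s problem is not missing information but corrupted trust weights.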

It’s possible that some people in a cluster on social media which is mainly an epistemic bubble might exhibit echo-chamber-type behaviour, but they are unlikely to stay aggregated together, or to stay in that mixed condition, for long.

The references directory is designed to benefit people who are in epistemic bubbles, or in the early and recoverable stages of enculturation into an echo chamber, but this approach cannot possibly reach hardened members of echo chambers, because of their habitual social resistance to outsider information.

‘Filter bubble’ is a related concept, but: a) a filter bubble is fundamentally an individual phenomenon, whereas Nguyen’s usage of ‘epistemic bubble’ implies a collective aggregate of individual filter bubbles; and b) ‘filter’ describes how a kind of (collective) epistemic bubble forms, due to the personalisation algorithms in social media. Facebook’s newsfeed now uses a continuously self-updating machine learning algorithm rather than the original EdgeRank algorithm, but it is probably still largely similar.

Knowing basically how personalisation algorithms work is essential to understanding the information and political environment we live in now; not understanding it is like flying blind. Focus on the affinity scores factor.
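For readers who want the gist in code, here is a sketch of the publicly described EdgeRank structure (the actual weights and decay curve were never published, so every number below is an illustrative assumption): each ‘edge’ (like, comment, share) on a post is scored as affinity × edge-type weight × time decay, and the post’s score is the sum.

```python
# Illustrative edge-type weights; the real values were never public.
EDGE_WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 8.0}

def affinity(viewer, creator, past_interactions):
    """Affinity score: roughly, how much the viewer has interacted with
    the edge's creator before. More past interaction -> higher affinity ->
    more of that person's content shown -> more interaction: the feedback
    loop that inflates individual filter bubbles."""
    return sum(1.0 for v, c in past_interactions if (v, c) == (viewer, creator))

def time_decay(created_at, now, half_life_hours=6.0):
    """Newer edges count more; an exponential half-life is assumed here."""
    age_hours = (now - created_at) / 3600.0
    return 0.5 ** (age_hours / half_life_hours)

def edgerank_score(viewer, post, now, past_interactions):
    """EdgeRank-style score: sum over the post's edges of
    affinity * edge-type weight * time decay."""
    return sum(
        affinity(viewer, edge["actor"], past_interactions)
        * EDGE_WEIGHTS[edge["type"]]
        * time_decay(edge["created_at"], now)
        for edge in post["edges"]
    )

now = 1_700_000_000.0  # any epoch timestamp
post = {"edges": [
    {"type": "share", "actor": "alice", "created_at": now - 3600},
    {"type": "like", "actor": "bob", "created_at": now - 7200},
]}
past = [("me", "alice"), ("me", "alice"), ("me", "bob")]
print(edgerank_score("me", post, now, past))
```

The affinity factor is the one to focus on: because past interaction raises future visibility, which generates more interaction, it is a positive feedback loop, and that loop is what aggregates individual filter bubbles into the collective bubbles discussed below.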

Twitter added a personalisation algorithm to its newsfeed much later, after seeing some of the problems caused by excessively aggregating algorithms on other platforms, and you can see the results in the different community structures of Facebook vs. Twitter networks: Twitter’s network has lower modularity, or in other words, its epistemic bubbles are not as isolated from each other (Yochai Benkler, Robert Faris, Hal Roberts and Ethan Zuckerman, of Harvard Berkman Klein Center for Internet & Society and MIT Center for Civic Media, ‘Breitbart-led right-wing media ecosystem altered broader media agenda’, Columbia Journalism Review, 3 March 2017).
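‘Modularity’ here is the standard network-science measure: roughly, how much denser the links are within communities than would be expected at random. A sketch using networkx on two synthetic follower graphs (the graph shapes and parameters are invented for illustration, not data from either platform):

```python
import networkx as nx
from networkx.algorithms import community

# Two synthetic 'follower' graphs: one with two nearly sealed-off
# communities, one with many cross-community links.
segregated = nx.planted_partition_graph(2, 50, p_in=0.30, p_out=0.01, seed=1)
mixed = nx.planted_partition_graph(2, 50, p_in=0.30, p_out=0.15, seed=1)

for name, g in (("segregated", segregated), ("mixed", mixed)):
    parts = community.greedy_modularity_communities(g)
    q = community.modularity(g, parts)
    print(f"{name}: modularity Q = {q:.2f}")  # higher Q = more isolated bubbles
```

On Benkler et al.’s description, the Facebook-like graph would behave like the first case and the Twitter-like graph more like the second.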

I think there is also a third stage, which Thi Nguyen mentions but dismisses as of minor significance; I doubt that it is. It occurs when an individual has operantly conditioned themselves into an echo chamber to such an extreme, often as a leader, that the collective narcissism of the group suppresses and replaces their intellectual autonomy. They can then lie incredibly blatantly, constantly, and without social anxiety or restraint about doing it, because the judgements of outsiders are completely devalued for them, and the in-group only lavishes praise on their increasingly extreme statements for affirming the collective identity, so that they experience only social rewards for lying. It is related to performative political lying as a way of demonstrating and claiming power. I would call this third stage collective narcissistic delusional disorder.

People who are in an echo chamber experience the social performance of blatant, performative lying by their leaders as emotionally rewarding, because it affirms their group identity and attacks the credibility of outsiders. Factual debunking can then be counterproductive, because echo chamber members tend to react to it in ways which effectively condition them further into identification with the group and disassociation from outsiders (Nguyen).

There is also a fourth possibility: a coordinated echo chamber. I think it is possible for an echo chamber to form without any central coordination, and the degree of central coordination in an echo chamber is a continuous variable. When a few nodes in an echo chamber cluster have much higher betweenness centrality than the rest, and the typical aggressive collective response to group-identity-threatening information has been observed, that is probably a coordinated echo chamber. Coordinated-ness could also be measured with information dispersal trajectories: if the dispersal trajectories in a cluster are consistently highly similar across items and across time, that probably indicates central coordination.
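As a sketch of the first signal (the choice of top-k and the toy graphs are my assumptions, not validated cut-offs): compute betweenness centrality for every node in the cluster and check how concentrated it is in the top few nodes.

```python
import networkx as nx

def top_betweenness_share(g, k=3):
    """Fraction of total betweenness centrality held by the k most
    central nodes; a value near 1.0 means a few brokers dominate the
    paths along which information travels."""
    scores = sorted(nx.betweenness_centrality(g).values(), reverse=True)
    total = sum(scores)
    return sum(scores[:k]) / total if total else 0.0

# A hub-and-spoke cluster (one broadcaster, many repeaters) vs. a
# diffuse cluster with no dominant brokers. Purely illustrative graphs.
hub_and_spoke = nx.star_graph(40)
diffuse = nx.erdos_renyi_graph(41, 0.15, seed=2)

print(f"hub-and-spoke: {top_betweenness_share(hub_and_spoke):.2f}")
print(f"diffuse:       {top_betweenness_share(diffuse):.2f}")
```

The second measure, dispersal-trajectory similarity, could analogously be operationalised as the average pairwise similarity of share cascades across items; consistently near-identical cascades over time would be the coordination signal.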

The radical restructuring of the Public Sphere which hardly anyone has noticed, let alone consented to

I hope I have sufficiently explained above how the new social media environment constitutes a radical restructuring of the Public Sphere.

An interesting feature of this radical restructuring of the social information environment is that probably neither the designers nor the users consciously recognised the most basic assumptions they were either building into the design or cognitively adapting and enculturating themselves to.

That raises the worrying, but also potentially very useful, insight that unconscious adaptation to a socially constructed environment, and to the assumptions and norms built into its structure, can condition people much more efficiently than conscious communication can.

Relocating and restructuring the Public Sphere into the networked social media environment also means that collective decision-making is effectively more evenly distributed, which could be more democratic; but I think we have seen that a more evenly distributed decision-making network by itself does not produce the values-based kind of democracy we really want.

I think the Populist insurgency across predominantly liberal democratic societies is, approximately to the extent that it is genuinely a grassroots movement, motivated by a sense that the location and structure of the Public Sphere have shifted, and that politicians are now more followers of public opinion than leaders. It is a partial and faulty diagnosis that leads to the wrong treatment, but it contains some sense of what has really changed.

I believe we can adapt to a primarily internet-based Public Sphere, one structured and imagined as a network, and that it can be better. But we need our cultural institutions and norms about information sharing and decision-making to update to the new social information environment, and partly also to readjust our social information environment to fit the kind of political culture(s) we really want. Culture evolves to fit its environment, and the socially constructed environment is also constantly being adapted to cultures.

This article is intended to start a discussion, not to conclude it.

I’ve been struggling with this question for years, but still feel like I’ve only scratched the surface. Like Solomon said:

“17 I set out to understand wisdom and to understand foolishness and delusion. But I perceived that this also is a chasing after wind.

18 For in much wisdom is much vexation,
and those who increase knowledge increase sorrow.”


Kester Ratcliff

Lapsed biologist retraining as a social data scientist, often writing about refugee rights advocacy and political philosophy.