What is ‘propaganda’?

comparing & evaluating different definitions of ‘propaganda’

I wrote the first edition of this article in 2018, and I’ve updated it many times since. In this update (Feb 2021), I added a discussion on intentionality~stochasticity, based on Asmolov (2019) and my own observations, and I’ve found more to read. This is an evolving record of my reading and thinking. The most famous history of propaganda, by Jowett and O’Donnell, is now in its 7th edition; their first edition appeared in 1986.

When I first wrote this, I was preparing to publish ‘the Assadists Directory’,* a report on 150 pro-Assad regime international propagandists, and I was aiming to preempt them calling the report ‘propaganda’ by defining the term clearly, and to be clear about what I meant by calling them propagandists.

Developing a clear definition of ‘propaganda’ is related to, but distinct from, the problem of deciding what degree of conscious intentionality makes someone who participates in spreading propaganda a ‘propagandist’. It is possible to participate in propaganda without any self-awareness of doing so, and it appears to be a sliding scale, with many convolutions, up to the fully conscious agent.

Contents

Propagandizing about ‘propaganda’
Propaganda and ‘propagandist’
Some popular misunderstandings
· Comparison of definitions of ‘propaganda’
Aristotle
Sacra Congregatio de Propaganda Fide
Lasswell
Totalitarian propaganda systems
Arendt, Cicero, and Soros
Asmolov — participatory propaganda
Facebook’s definition of ‘Coordinated Inauthentic Behaviour’
· Evaluating definitions and propaganda
Symbolism and metaphor
Metapolitical universals
Emotional dispositions
Identification of opponents
Social Heuristics
Radicalisation and propaganda
Propaganda from civil society — advocacy
‘Propagandicity’ as an intensity of propaganda variable?
Conclusion

The meaning of the term ‘propaganda’ is itself subject to propagandizing. Accusations and counter-accusations of being a ‘propagandist’ are common. In some cases they’re clearly disingenuous*, and in other cases genuine*. It is usually hard for non-specialists in a subject to judge which is which.

The new affordances of the global information space, now located mainly on social media networks, have changed the way propaganda works. Actual propagandists often use confusion about the meaning of ‘propaganda’ to accuse their enemies of what they are in fact doing (the ‘mirror accusation’ technique*), in order to discredit them preemptively.

Some would like to make ‘propaganda’ mean any State-funded public communications, which is a convenient definition to use in whataboutism or false equivalency tactics. Some make it mean any sort of persuasive communication, which is also useful in false equivalency narratives and can be stretched to erase the possibility of objective facts and universal logic independent of partisan preferences and perspectives.

Lasswell’s definition of propaganda as ‘manipulation of symbols’ is historically comprehensive, and it has the advantage of pointing out the important function of propaganda to instantiate metapolitical universals, but it doesn’t help much to distinguish between better and worse uses of propaganda methods, which is mainly what non-academics want a clear definition for.

A related phenomenon is “coordinated inauthentic behaviour”, which is sometimes glossed as ‘propaganda’ in secondary media reports. I discuss Facebook’s definition of this and how it relates to different assumptions and understandings about the scope of ‘propaganda’ below.

Some people imply much narrower definitions, usually presupposing that ‘propaganda’ is morally negative, something that their opponents do. Very few usages of the word come with any explicit definition.

In Col. V.M. Maksimov’s 1977 thesis, used as a training manual in the SVR and FSB, he discusses how to handle ‘confidential contacts’ — people acting as informants, or occasionally as agents of influence, who are not fully aware or informed by their handlers that they are in contact with Russian intelligence. He discusses when to turn a confidential contact into a full agent, and that they are then given training and a new handler.

The standard of evidence for deciding whether someone is merely an ideological affiliate or a useful idiot “authentically” creating propaganda for a foreign authoritarian regime, or is somewhere on the scale towards a self-aware agent in direct contact with, and coordinated and controlled by, that regime, depends on the context and purpose of the judgement call. In the context of counterintelligence, what usually matters is just their behaviour pattern. Intentions might matter in an individual case, but intentions are not directly observable, and when assessing risks of collective systems of manipulation any assessment of intentions would necessarily be so generalized that it wouldn’t add much reliable intelligence value over the analysis of the behaviour of groups or sub-graphs. In the context of criminal prosecution, what the person understood and intended matters; often it is mens rea, the specific intention of the crime, which determines whether they are criminally chargeable or not.

In the context of informing public discussion and evaluating sources, it is usually impossible to know for certain, from public domain data alone, what degree of conscious intentionality a person who consistently repeats propaganda in favour of a state or other political group actually has. The counterintelligence standard of making a judgement call, to a specified degree of probability, based on behaviour rather than intentions, is more practical in the context of public discussion, but it is not widely accepted. People seem to assume that ‘propaganda’ means only that which is directly connected to, coordinated and controlled by a state, and that ‘propagandist’ means only those who are fully conscious agents. I think this is because people often have an unrealistic sense of how much conscious intentionality and agency people generally have, so they’re not used to considering how much bad behaviour is motivated not by deliberate malice but by degrees of habitual ignorance.

Informants and agents of influence are usually recruited because they feel they are failures or of low social status, but they desperately want to be seen as special or superior, and being in contact with a foreign state intelligence handler meets their needs.* People with such a narcissistic predisposition often suppress knowledge about themselves if they can’t interpret it to fit the self-meaning which they want, or which their internalized overbearing parent wanted to see in them. Thus they are even more likely than the general population to suppress self-knowledge which does not make them look independently important.

Another factor is that with people who are only on the peripheries of a participatory propaganda sub-network it may be counterproductive to identify them with the committed propagandists, because if they feel negatively identified with the group they might commit to it more; this is analogous to the ‘vaccine hesitant’ vs. anti-vaxxers distinction.

For some of the people producing propaganda for the Assad regime, I think it is reasonable, considering all of the public domain evidence, to infer that they consciously intend to act as agents of influence; but most are probably not fully aware of what they are participating in. As Maksimov says, the relevant thing is that they produce propaganda consistently for the regime and against its opponents, at significant volume or at important bridging nodes in the network, so I listed them as ‘propagandists’.

To begin to make our discussion about ‘propaganda’ educational and not more propagandizing about propaganda, we have to start with a clear definition — it doesn’t have to be perfect, but it should be clear enough to be falsifiable and open to updating when the environment changes.

The most often misunderstood point about ‘fake news’ or disinformation currently is that people assume that if a news story has some factual basis, it is not “fake”, and is therefore true. If only things were so simple…

Disinformation, or propaganda, almost always begins with a true fact or established opinion, but then uses it to spin a story which as a whole is untrue, irrelevant, taken out of its context in order to make it appear to mean something totally different, or re-framed in a way which is false and manipulative. If you find that the first factual premise (or endoxa) in a piece of persuasive communication is true, that does not prove the rest of it is.

Disinformation can even be fully true insofar as that the particular event(s) really happened as described, but selectively misrepresented as if they are generally representative or typical, without information about frequency or distribution of that type of event compared to general trends or patterns. Good news reporting includes frequency and distribution information.

Specialists distinguish between ‘disinformation’ and ‘misinformation’ — disinformation is layered and deliberate misinformation, almost always starting with a bit of truth to make the whole story seem credible, and misinformation is simply false information, which is more often spread unintentionally, with those spreading it not recognizing that it is false.

The simplification of ‘disinformation’ as ‘fake news’ has advantages and disadvantages. Some disinformation is sheer fakery, not only in content but also in the semi-covert coordinated & automated amplification and social-proofing networks. But the term conflates disinformation and misinformation, and frames the issue in a way which leads much of the audience to misunderstand it as simply a question of whether news is totally faked or not.

The term ‘fake news’ has also been weaponized, mainly by Donald Trump, as an objectively meaningless, hyperpartisan label for all information coming from politically opposed groups or threatening to the in-group’s political identity cohesion, similar to the Nazi term “Lügenpresse”.

The second most frequent misunderstanding is to assume that more frequently or widely repeated claims or opinions are more likely to be true. Fake social proofing mechanisms are often bigger than actual unique individual humans’ interactions with a topic in the first phase after an issue breaks into mainstream public attention (lag time between specialists receiving news and some of it breaking into the mainstream ranges from a few hours to a few decades, but most often now it takes a few days). So the most frequently and widely repeated opinions are less likely to be true than the unpopular, specialist, harder to find and harder to understand opinions.

Fake social proofing, or pseudo-independent corroboration, means using paid or volunteer trolls, who pretend to be ‘independent’ and may each run many accounts, together with amplification bot-nets, to comment and post in groups. Trolls and bots amplify topics and narratives, harass and deter opponents, or distract and discredit them in the eyes of the non-specialist audience who are newly aware of the issue and don’t know enough background to recognise distract-and-discredit tactics. The next stage of faking social proofing is amplifying narrative themes and frames via semi-covertly connected propaganda outlets, which claim to be “independent” e.g. *, and then automatically linking their posts or pages around many media content aggregators, to optimize for search engines’ ranking algorithms and to feed the bot-nets. These are routine tactics now.

Social proofing of opinions is often systematically faked, and anyway, the popularity of an opinion has no necessary logical connection to whether it’s true. Many established opinions are layered falsehoods — partial lies established upon partial lies, and “it’s turtles all the way down” (Pratchett). A very unpopular example of a popular opinion like this is the ‘US sanctions caused increased Iraqi infant mortality in the 1990s’ myth —

“Iraqis did suffer a lot under the sanctions regime, but the child mortality rate was not highly elevated throughout the 1990s and just before the invasion of Iraq. Nor did the child mortality rate plummet after this invasion.”

The sanctions on Saddam Hussein’s regime in Iraq could have been better specified to minimise the effects on the civilian population, but the claims that there was a large increase in infant mortality consequently or that it was caused by the sanctions regime are false, and, ironically, those false claims were originally made by proponents of the US-UK military action.

Subsequently, the Russian and Iranian regimes’ international agents of influence refer to the popular established opinion that US sanctions caused Iraqi dead babies to justify almost any conclusion. Pro-Assad propagandists, most notably Joshua Landis, currently use the popular opinion that US sanctions are always bad to influence public opinion to remove international sanctions on the Assad regime, even though the latter are much more specifically targeted, and the food and fuel shortages cannot plausibly be causally connected to the international sanctions on regime officials.*

The Iraqi dead babies myth is a case of the ‘rotten fish’ propaganda technique: i) evoke outrage, fear or disgust, ii) direct those visceral feelings into a metaphorical imaginary context where the meaning and action you want to manipulate people towards reproducing “authentically” appears to make sense, iii) most people will not pause to think whether it’s actually true before reacting, iv) the emotional imagery and predisposition persists, so you can re-use it to influence subsequent decisions too. It also works with accusations of paedophilia, in some cultures homosexuality, and incest.

Fear of strangers is another strong emotion often used in propaganda. E.g. Nigel Farage’s Brexit campaign poster in 2016 showed the mass migration of mostly Syrian, Afghan, and Iraqi refugees across the West Balkans. The poster framed these people as a feared out-group, and implicitly claimed that remaining an EU member state was relevant to whether people in need of international protection, migrating irregularly across borders because they were allowed no viable legal alternative, would arrive in the UK or would legally have to be accepted there. Brexit made no difference to the rate of refugees arriving in the UK or to the UK’s international human rights legal commitments, except the EU Charter of Fundamental Rights. Emotional, social cognition occurs before conscious reflective thought can occur, so manipulative fear-based propaganda techniques are very powerful.

Even specialists who see propaganda every day can fail to think clearly about how well their assumed implicit definitions fit all their observations. They are like Darwin on his first visit to Cwm Idwal, before he read the theory of glaciation. He had seen all the things, but had not yet seen the patterns of what they meant or composed them into a coherent theory.*

Propaganda has really changed. The audience now participate more directly, and it is less directly centrally coordinated, more noisy and more stochastic. Its increased complexity and stochasticity confuse people about how much the main patterns are really state-influenced. More than looking for direct connections, we should be looking for the earliest influences.

While reading around to do this update on intentionality~stochasticity, I found: Garth Jowett and Victoria O’Donnell (2018) Propaganda and Persuasion, 7th ed., Zbyněk Zeman (1982) Art & Propaganda in World War II, and (1962) Nazi Propaganda, and Richard Nelson (1996), A Chronology and Glossary of Propaganda in the United States, in three volumes. I haven’t read them yet, but I’ll try to include them in my next update of this article.

Comparison of definitions of ‘propaganda’

The simplest common starting point is that propaganda is a sub-category of persuasive communications. My working definition before I read more was: ‘propaganda’ == unreasonable means of persuasion, i.e. bad.

Aristotle did not use the word ‘propaganda’ (which is Latin, not Greek), but he did write on rhetoric. Rhetoric = persuasive communications. Aristotle’s Rhetoric was studied in Europe for many centuries, and he taught that rhetoric can be used for good or bad purposes. One possible way to interpret Aristotle’s definition of rhetoric is that bad rhetoric = propaganda, if you’re assuming that ‘propaganda’ is inherently a bad thing.

Aristotle’s theory is a practical framework, so long as you’re reflecting on mainly face-to-face communications. When communications become more abstracted, delocalized, and there is more potential for divergence of power relations between the communicants and the subjects they discuss, then there are more factors to consider and more layers of complexity involved.

According to Aristotle’s discussion of rhetoric, persuasiveness depends on:

  1. the perceived character of the speaker,
  2. the emotional disposition of the audience, and
  3. the relevance and clarity of the rhetorical proof.

Any of these three factors can be falsified, intentionally or not.

To falsify the emotional disposition of the audience means to arouse an emotion which does not realistically relate to the subject of the argument.

Aristotle taught that rhetorical techniques can be misused deceptively, and if he were alive today he might call that ‘propaganda’, in the bad sense. His theory of rhetoric is a practical framework to ethically analyse what differentiates good and bad rhetoric, or good and bad propaganda.

A rhetorical proof (enthymeme) is a sub-category of logical syllogism which has to be premised on an established opinion (endoxa) in the target audience, not facts or inferences only known to a more specialist audience.

I would add to Aristotle’s account of ethical use of rhetorical techniques that good style should not only be clear and appropriately dignified in form, but it should also be suitable to induce an emotional disposition in the audience which reflects their personal interior freedoms of intellect, conscience and will, and promotes a sense of responsibility to judge fairly (perhaps this is what he meant by ‘dignified’). Thus it should refrain from inducing emotional states which would diminish the audience’s interior freedoms or mislead them into alienating their personal responsibility to an authority.

An argument presented in a way designed to induce an inappropriate fear, and to use that fear to bias people to believe in a false causal connection or moral attribution and then to act on it, is an unethical rhetorical technique.

Rhetoric can also induce states of epistemological alienation, either despair or self-indulgent irresponsibility. Both involve imagining that we each live in a unique private universe where ‘reality’ is just a projection of our feelings, a world only shared with our ideological compatriots, a narcissistic world. I discuss this more under ‘Identification of opponents’.

Aristotle’s theory is a practical way to understand persuasive communications in face-to-face relations, but when communication gets more abstracted away from direct, shared experience, with more inequitable dynamics between those who communicate and those who are represented and objectified, then abstraction has more potential to become delusional or deceitful, and more prone to conflictual framing in propaganda. I think this is essentially why the introduction of printing presses in Europe triggered about 500 years of viciously ideological wars and violence, and why in this early phase of internet globalisation there’s such a divergence between those who have more power to communicate and those who are represented in objectified ways. Hopefully with an injection of understanding we can prevent more violence.

The original context of the term was probably in the name of the Sacra Congregatio de Propaganda Fide — abbreviated to ‘the Propaganda’, and now renamed as the ‘Sacred Congregation for the Evangelisation of Peoples’. It is a congregation of the Roman Curia with responsibility for oversight and support of missions in areas which do not yet have an episcopal diocese. In that context, ‘propaganda fide’ (propagation of the faith) was something those who engaged in it believed was truthful & ethical. Also implicit in this usage is that what was propagated was a whole belief system, which conditions personality and politics, instantiated in each particular symbol.

Thomas Aquinas synthesised Aristotle and Catholicism and made a humanistic metaethical theory, which I think still basically works well, and maps easily onto modern in-/out-group social psychology (Shkurko, 2014).

Thomistic eudaemonic ethics sees the ultimate good as restoring the balance and integration of everything that is real, rather than conceptualising it as purification from ‘evil’, as if evil were an independently really existing thing itself (which would contradict Genesis 1:31). Thomistic ethics means: (i) a distinction between partial, relative goods and ultimate, common goods; (ii) intending a partial good in an excessive way which harms a common good is ‘evil’; (iii) the Thomistic theory of evil is that evil is subjectively real, but objectively it is an unreality, or a lack of realism, in epistemology, ontology, and metaethics, or in other words a relative absence of realism in our relationships and in our sense of self; (iv) an unjust relationship always depends on an unrealistic understanding of it; (v) the fault of understanding always comes before the fault of will in ‘deliberate evil’, so no-one ever really fully consciously intends evil; it is always rooted in ignorance. Hence the ‘Tree of Knowledge’ mythical symbol — by sampling particular, partial goods for oneself, one cannot know in advance whether one will take too much fruit and thereby damage the whole tree’s perfect balance and completeness, and we have an innate predisposition to seek partial, selfish goods, because we do not see or comprehend the whole, ultimate Good.

Aquinas’ concept of partial vs. universal goods is easy to relate to in/out-group social psychology, which Asmolov points out gets intensified by reactionary, authoritarian propaganda. The arbitrary preference for partial, relative goods for ‘us’ vs. ‘them’ is what makes bad propaganda bad. Ultimately, there is no ‘them’, only us. And, “We all bleed the same colour.”

The first recurring issue in defining propaganda is usually whether propaganda in general is inherently bad, or a neutral tool which can be used for good or bad. In contemporary usage, ‘propaganda’ is almost always assumed to be more or less bad. That probably comes from its usage in popular discourse during wars, when “propaganda” would usually be short for “enemy propaganda”, but the most comprehensive historical analyses of it have concluded that in general it is not always bad.

Harold Lasswell, a political scientist who studied propaganda used in WW1, defined propaganda as “manipulation of representations” in the Encyclopedia of the Social Sciences (1930–1967), volume XII. He considered it to be “a mere tool … no more moral or immoral than a … pump handle”.

Lasswell defines propaganda as the purposeful manipulation of symbols (maybe now he would say ‘metaphors’?) to influence action. His word “manipulation” means arranging symbolic representations deliberately to motivate action, but it is not necessarily manipulative in the sense of manipulating the intellect or will of the audience. The object of manipulation, in Lasswell’s terms, is the set of representations, not the intellect or will. This is why he defines ‘propaganda’ as a neutral tool.

Two totalitarian ideological systems in the 20th century used propaganda: Fascism (of which German Nazism was a species) and Communism.

Let us start with the Communist, or to use Lenin’s term ‘Social Democratic’, definition and usage of propaganda, dating from 1901 and preceding Nazi usage.

These are accounts of Lenin’s theory of agitation and propaganda from sympathisers, so of course they do not highlight the problems.

Lenin’s usage of ‘propaganda’ places the revolutionary aim over the subject(s)’ claims to reality. This is justified by arguing that current realities are so thoroughly conditioned by dominative, exploitative relations that what people subjectively experience as ‘reality’ is just their ‘false consciousness’. The aim of persuasive speech, under this set of assumptions, is an object in the minds of the audience, not responding to the subject(s).

This reduces personal freedom and responsibility of judgement to collective reaction. The ‘person’ becomes merely a unit of a collective, and rightful autonomy and agency are situated only in the collective. Difference is deviance. Dissent is disloyalty to the revolutionary proletariat, or the party.

Lenin apparently would agree that a rhetorical proof should take as its premises established opinions and deduce the intended conclusion.

Lenin distinguishes agitation from propaganda. Agitation works by pointing to an event and a real or concocted contradiction, then turning the news about the event, around that point of contradiction, into motivation for meaning-making or action in support of the political agenda of the Party.

There is a huge archive of German Nazi and German Democratic Republic (Communist) propaganda here —

A significant number of Holocaust survivors became important philosophers, and there are many good analyses of Nazi propaganda, and of the philosophy and psychology underlying their effectiveness.

My personal favourites are Arendt and Levinas, but they go more into the deeper philosophy of how ordinary people came to commit atrocities.

I would like to read next Zbyněk Zeman’s history of Nazi and Allied propaganda, and Alice Miller’s psychoanalytic history of Hitler, For Your Own Good (1982).

Arendt considered that the most dangerous kind of propaganda is that which doesn’t so much aim to persuade people of anything in particular, but rather aims to induce a despairing or self-indulgent disbelief in facts and logical truthfulness as real possibilities, which predisposes people to authoritarianism.

“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.” — Hannah Arendt.

Hannah Arendt nevertheless finds value in what Cicero said about the features in common between political lying and the kind of freedom of imagination in constructing new meanings which is necessary to act politically to create a paradigmatically or revolutionarily different future.

Arendt’s writing is always hard to summarize, but the most relevant point here is: there is a range of imaginative construction of possible future realities which is neither dishonestly nor delusionally unrealistic, nor is it coercively manipulative to the intended audience’s freedom of will, intellect or conscience, but it is both representative and (re)constructive of reality, in that it constructs metaphorical representations of patterns and processes of life, bigger than any single point of view in a particular place and time could see, and it works with reflexivity. Constructing a representation of reality is an art, which inevitably involves choosing which elements to represent and which to leave out, in a theoretically abstract and simplified model of how reality works, so theory not only represents but also creates new possibilities.

I think her argument here is similar in principle to George Soros’ (2013) argument in ‘Reflexivity and the Human Uncertainty Principle’.

Soros’ argument is that the process of scientific hypothesis formation, prior to testing or defining scientific ‘laws’ or causal regularities, is inevitably also somewhat artistic and ethical. Experimental scientists develop a technical sense, which is empirically underdetermined, about how to choose which factors to investigate or model. The ‘independent’ variables in social science are often constructs, not directly observable or material things, and statistical tests of construct validity of theories as wholes are rare.

The empirically underdetermined choices in hypothesis formation and defining measures of immaterial phenomena are inevitably carried out under conditions of complex uncertainty, partly because of how economic social realities are reflexively influenced and shaped by economic theories. Given the inevitabilities of complex uncertainty and reflexivity, scientific methodology cannot be morally neutral. So we should choose hypotheses which at least do not reflexively influence realities in socially harmful ways. Scientific hypothesis formation and theorizing should, like all the civil professions: “First do no harm” (Primum non nocere).

The uncertainty and reflexivity involved in social theorizing is related to Cicero and Arendt’s point that persuasion which aims to affect people’s basic implicit values and assumptions about life, which are usually unconscious, should be ethical in its means as well as in its ends, because the means instantiate the ends, via implicit metapolitical universals. What metaphors or linguistic frames we choose, how we simplify reality to fit our minds, can potentially (re)construct social realities. First do no harm.

What Lasswell and Arendt missed, because it wasn’t such a big problem yet then, is the complex interaction between intentionality and stochasticity.

Asmolov’s updated theory of participatory propaganda in the new social media dominated public sphere is the most convincing I’ve read so far.

Some people assume that for something to be ‘propaganda’ it must be directly connected to a state, with coordinated, strategic intentions. Such a definition can be used to catch all public broadcasting, and any civil society organisation which receives any state funding or access to information, in the sense of ‘propaganda’. It is mostly used to diminish the recognition of international propaganda by authoritarian regimes, most of all Putin’s Russia, or else unwittingly ‘both-sides-ing’ an issue unrealistically.

Assuming that ‘propaganda’ must be directly connected to a state also artificially excludes participation by followers in authenticating propaganda narratives, which have often been introduced and shaped primarily by state actors. Stochastic replication and development by followers, in a complex, mostly decentralized, networked system, does not really mean that the main patterns of topics, themes, and frames, have not been introduced and selectively amplified by state intelligence agencies.

Most iterations of a narrative framing or opinion probably are authentically expressed by non-state authors, but most of those authors have already been manipulated into those beliefs by state actors. The earliest inputs are the most effective at shaping collective behaviour patterns in evolving complex systems. The simple state-/non-state-origin distinction is impossible to maintain in such a complex and noisy environment, at large scales.

This is why I advise: rather than looking at whether most of the network-actors in a system are directly connected to a state to determine whether what they are doing should be called “propaganda”, look at the earliest influences in the discussion network, and ask whether they were state-connected, online or offline, recently as well as historically or only historically.
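
The heuristic of checking the earliest influences rather than the whole network can be sketched in code. This is a minimal illustrative sketch, not an actual analysis pipeline: the post fields (`author`, `ts`, `frame`), the list of known state-connected accounts, and the toy data are all hypothetical assumptions, and real attribution would require far richer evidence.

```python
# Hypothetical sketch: find the earliest "seeders" of each narrative frame
# in a discussion network, then check what share of those early accounts
# appear on a (hypothetical) list of state-connected accounts.
from collections import defaultdict

def earliest_seeders(posts, k=3):
    """For each narrative frame, return the first k distinct authors' posts, by timestamp."""
    by_frame = defaultdict(list)
    for p in posts:
        by_frame[p["frame"]].append(p)
    seeders = {}
    for frame, items in by_frame.items():
        items.sort(key=lambda p: p["ts"])
        seen, first = set(), []
        for p in items:
            if p["author"] not in seen:
                seen.add(p["author"])
                first.append(p)
            if len(first) == k:
                break
        seeders[frame] = first
    return seeders

def state_connected_share(seeders, known_state_accounts):
    """Fraction of early seeders per frame found on the known state-connected list."""
    return {
        frame: sum(p["author"] in known_state_accounts for p in first) / len(first)
        for frame, first in seeders.items()
    }

# Toy example: a frame seeded early by two state-linked accounts,
# then replicated "authentically" by many followers.
posts = [
    {"author": "stateA", "ts": 1, "frame": "sanctions-kill"},
    {"author": "stateB", "ts": 2, "frame": "sanctions-kill"},
    {"author": "user1",  "ts": 5, "frame": "sanctions-kill"},
    {"author": "user2",  "ts": 6, "frame": "sanctions-kill"},
    {"author": "user3",  "ts": 7, "frame": "sanctions-kill"},
]
seeders = earliest_seeders(posts, k=3)
share = state_connected_share(seeders, known_state_accounts={"stateA", "stateB"})
print(share)  # share of the earliest 3 seeders that are state-connected
```

A count of state-connected actors over the whole network here would look small (2 of 5), while the seeder-focused view shows the frame was introduced by state-linked accounts, which is exactly the distinction the advice above is making.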

Cases where propaganda is simply either a direct result of state control, or content produced by followers which coincidentally aligns with state propaganda themes and frames, are probably about as rare as single-gene-determined traits (almost all biological traits are determined by often extremely complex gene*environment interactions). The more stochastic content production by followers which gets selectively amplified has been shaped by state-coordinated propaganda, online and offline, over hundreds of years, and usually the basic themes and frames were introduced by agents of influence, who are amplified by massive state-controlled bot-nets.

The stochasticity of content, like apparently ‘independent’ corroboration, does not really mean that it is authentic, or coming from un-manipulated beliefs. ‘Authentic’ content production that is ideologically aligned with state interests mainly comes from earlier iterations of state-controlled propaganda.

Which narrative themes and frames can possibly achieve virality depends directly on stochastic content produced by followers and indirectly on the long history of state propaganda developing their ideological narratives and emotional, personal, and political predispositions. Relatively stochastic content produced by followers and directly state-coordinated propaganda content are interdependent, and both are constrained by hysteresis between past and present iterations of the system. We may only be able to collect 7 days’ worth of data from the Twitter public API, but the phenomena we’re studying have hundreds of years of historical conditioning, and the participants draw on that in the metaphors which they are predisposed to imagine, which frames they consider plausible or identity-reinforcing, and which strike them as opposed.

What the major social media platforms call ‘inauthentic behaviour’ refers to coordinated or automated distribution of disinformation and manipulations of network structure, especially by states. They keep being too late and too lax in dealing with radicalization to violence on their platforms partly because they rely too heavily on the ‘authentic’ criterion.

Participatory propaganda is mostly authentic, in the short-term view. The fraction of participatory propaganda which can be proven to be ‘coordinated inauthentic behaviour’ is only a minimal part of the whole.

Asmolov touches on how the interests of reactionary state propagandists, disturbed by losing control over their ‘national’ information spaces, converge with the interests of platform companies to maximise engagement and size.

“propaganda has become less interested in changing people’s opinion about a specific object or in convincing people that it is either truth or fiction. The main purpose of 21st century propaganda is to increase the scope of participation in relation to the object of propaganda. In a digital environment relying on user participation, propaganda is a technology of power that drives the socialization of conflicts and a tool for increasing the scope of contagion. While participation in political debates is often considered to be an important feature of democracy, propaganda allows us to define the structure and form of participation in a way that serves only those who generate propaganda, and minimizing the constructive outcomes of participation. In addition, the focus on propaganda as a driver of participation could be considered a meeting point between political and commercial interests, since increasing engagement with a given object of content is a path towards more pageviews and more surrender of personal data. In that sense, propaganda serves not only the political actors, but also the platform owners.” — Gregory Asmolov (2019), The Effects of Participatory Propaganda.

Another way in which the commercial interests of platform companies converge with the interests of identitarian, authoritarian propagandists (i.e. those who design propaganda campaigns, not those who merely participate without a strategic overview of what they’re doing) is that consumerist identity-formation is functionally convergent with identitarian epistemology, ontology, and metaethics. The consumerist way of forming an identity consists of expressing an imaginary identity one would like others to see, and choosing consumer objects that reinforce that identity’s cohesion, whether material or immaterial, including information, opinions, social categories, political identity categories, etc. The architecture of the current social media platforms is built on the assumption of a consumerist way of forming an identity, and consistently enculturates that globally. So the platforms’ affordances effectively and selectively favour identitarian narratives.

Identitarianism was originally a far-right ideology, but it has of course been copied, with “left-wing” aesthetics, by leftists who practice solidarity with “actually existing socialist states” (sic), rather than with actual people.

“When we take down one of these networks, it’s because of their deceptive behaviour. It’s not because of the content they’re sharing. The posts themselves may not be false and they may not go against our Community Standards. We might take a network down for making it look like it’s being run from one part of the world, when in fact it’s being run from another. This could be done for ideological purposes, or it could be financially motivated.” — Facebook, explaining ‘Coordinated Inauthentic Behaviour’.

The Venn diagram of the overlap between ‘Coordinated Inauthentic Behaviour’ and ‘propaganda’ of course depends on what definition of ‘propaganda’ you’re assuming or consciously choosing. Most usages of the term ‘propaganda’ appear not to rest on any clearly thought-out definition.

Facebook’s application of their term ‘Coordinated Inauthentic Behaviour’ is fraught with arbitrariness. It attempts to side-step the hottest political controversies: about false and hyperpartisan content, and about what ‘freedom of speech’ should mean in a context of huge disparities of access to broadcasting and state-sponsored automated amplification systems. But by taking a minimalist, ‘neutral’ definition, they appear to be unable or unwilling to recognise coordinated networks of automated amplification or social proofing of false or selectively misrepresented, hyperpartisan content until years after many non-industry researchers have pointed it out. They seem to be stuck in a reactive mode, and reacting too slowly in most cases.

Evaluating definitions and propaganda

In this section I sketch out some aspects of the context of phenomena which may be considered ‘propaganda’ by the various definitions above, to clarify why I evaluate the definitions as I do.

In Lasswell’s definition, ‘propaganda’ has a similar meaning to ‘symbolism’. Lasswell appreciated how propaganda instantiates universals. Symbolism consists of metaphors. Lakoff and Johnson (1980) argue that metaphor is fundamental to all human languages and to cognition, a founding argument of cognitive-linguistic psychology.

‘Metaphor’ has a precise meaning in cognitive-linguistic psychology: a metaphor has a source domain and a target domain. The source domain is what it literally refers to. E.g. in “a wave of immigration” — the source domain is the image of a wave in the sea, and immigration policy is the target domain.

How people mentally process metaphors involves evoking some feelings related to the source domain and then applying them to deciding on something in the target domain. This cognitive-emotional process is often used in propaganda, and the ends and means of using it can be good or bad.

Authoritarian propaganda often uses metaphors of disease, parasites, natural disaster, invasion, rape, incest, paedophilia, etc. Genocidal propaganda typically refers to the target group as pests: “rats”* or “cockroaches”*. The primary emotions associated with the source domain are disgust and fear.

Subtler authoritarian propaganda refers to a group of people, usually foreigners and non-citizens, as a “wave” or some other inanimate, non-human object, which may be threatening*. The aim in such cases seems to be to induce emotional and moral indifference towards the people referred to, and thus to legitimise treatment of them that is indiscriminately unjust, or cruel by neglect. E.g. in EU official statements about refugees: whenever they want to publicly legitimise even harsher policies, they refer to refugees using metaphors which collectively reduce human persons to some inanimate or inhuman object.

Perhaps part of why Westerners seem to find it hard to recognise propaganda and to interpret complex symbolic metaphors is that our culture since the Enlightenment has suppressed all forms of symbolic communication other than literal rational prose, as inferior or primitive.

Complex layered symbolism, embodying emotions as well as concepts, originally was, and still tends to be, religious. Religious practice is one of, if not the main, area of human life in which complex layered symbolism or ‘manipulation of representations’ to affect or persuade an audience occurs. As far as I know, the areligious forms of political symbolism post-Enlightenment are derived from the previous religious forms.

Liturgy, preparative ritual action, as a practice, can sensitize us to implicit meaning-making and instantiation of universals in particular signs. Poetry, closely related to and overlapping with religious liturgy, is another good way to sensitize people to this deeper, emotional, implicit layer of communication.

Many years ago, when I was studying Buddhist history, I was impressed by a story in the Mahavamsa about a mango tree branch overhanging a monastic community boundary, which supposedly ‘legally’ justified the king in suppressing a peasants’ revolt*. Even something in itself trivial can implicitly instantiate basic, universal assumptions about (i) how we can possibly know anything, (ii) in what way we exist in the world, and (iii) how we should decide what’s right. Such basic universal assumptions vary as wholes. Epistemologies change first, so they are the most fundamental. Most people are not conscious of their basic assumptions most of the time.

There is cross-cultural empirical evidence for the theory of instantiation of values as a universal human psychological process —

Bardi, et al. 2018, Cross-Cultural Differences and Similarities in Human Value Instantiation, Front. Psychol. https://doi.org/10.3389/fpsyg.2018.00849

(I don’t agree with the circumplex model in this paper. The ‘openness — conservatism’ axis seems quite arbitrary, whereas SASB’s ‘friendly — hostile’, or ‘appetitive — aversive’, axis seems to me a simpler description.)

Applying this to the methods of persuasive communications, the method of persuading people to believe that X is true or to trust Y or to act Z instantiates a whole way of deciding how we can possibly know anything about what is true. When the factual and judgemental claims in a piece of communication are fairly explicitly stated and the evidence which convinced the one trying to persuade the other is explained, that is implicitly an act of autonomy-giving. It allows the other person to be Other, and respects the unpredictability of their freedom of intellect, judgement, and will. It sets up a relationship which is not arbitrarily unbalanced — the one attempting to persuade might know more, but by laying out the evidence and reasons, they don’t demand deference, they give the Other respect for their autonomy, and demand responsibility.

Kahneman’s Thinking, Fast and Slow (2011) suggests that we have two parallel thinking systems: fast, socially emotional and reactive judgements, and slow, cognitively expensive, individual reflective thought. Both are necessary, in balance. It’s a mistake to privilege individual reflective thought over social heuristics, because without heuristics our reflective processes get overwhelmed and we make even more irrational decisions.

Democratic societies depend on maintaining a balance between social heuristics (mainly, a fair amount of trust in specialists and in the specialization of societal functions), individual checking on authorities and majority opinions, and a range of honest, reasonable disagreements.

When we need to do individually reflective thinking, or cognitive reality testing on our group level social emotions and heuristics, we need to be in fairly balanced, calm, and individually action-focused affect states. Conflict-framing and escalating hostility does not help promote individual reflection.

Good, humanizing propaganda evokes calm and balanced emotional states. By giving autonomy and positively enabling the audience’s interior freedoms, it evokes ethical responsibility. It doesn’t aim only to persuade them of the particular point(s) as efficiently as possible, by whatever means necessary, because that would require bypassing their reflective reasoning in order to trigger a reaction. A cognitively democratic society requires an optimal balance: fairly trusting specialists who have more knowledge and responsibility, but not collectively delegating too much to authorities.

Good propaganda also balances seeking trust and autonomy-giving, and it doesn’t set up arbitrary or artificial power imbalances between the speaker and audience, or between the audience and the subject(s).

As Asmolov says, identifying groups by what the individuals in them have actually done, fairly specifically, doesn’t escalate conflict-framing or disconnective actions. Disconnective actions are a problem because they can lead to epistemological isolation and ‘garbage in, garbage out’ thinking.

Another way to think about identifying opponents is via psychotherapy theories, scaled up from the interpersonal to the political scale.

Personality differences and interpersonal dynamics also find expression in metapolitical basic assumptions, which can be analysed as sets of epistemological, ontological and metaethical (implicit) assumptions.

My preferred model of personality differences and dynamics is the SASB Circumplex*. It has three dimensions — (i) focus on Other, (ii) focus on self, and (iii) introjected self, and within each dimension there are two axes of interpersonal dynamics — (i) control or enmeshment vs. autonomy-giving or differentiation, and (ii) friendly vs. hostile, or appetitive vs. aversive.

Social environments where the self which others see, especially authorities, matters much more than the self one authentically is, and where one must conform totally to expectations or else be rejected, create a personality injury: the ego one presents to others has unbalanced relationships with one’s inward sense of self and with the sense of self internalized through one’s earliest object-relations dynamics. ‘Narcissism’ is an unhelpful label for this pattern of self-constructs, because it deters people from paying attention to what it points to, and to its origins. Political environments with strongly arbitrary patterns of economic and political power relations create narcissistic-authoritarian personalities and worldviews, and so tend to perpetuate themselves.

“Man hands on misery to man, it deepens like a coastal shelf…” — Philip Larkin.

Political narcissism also has arrogant and depressed forms, which usually alternate, or present differently in public vs. in private.* Narcissistic epistemology leads to an ontology in which the shared objective world does not exist, facts do not exist independently, and self and Other groups are not connected by universal logical reasonableness. Hence differences necessarily become divisive, and politics becomes all about eliminating opponents, which logically culminates in genocide.

People with a narcissistic personality injury and narcissistic-authoritarian metapolitics tend to project the characteristics they want to eliminate from their egos onto their opponents. A high-profile example of negative projection is Jim Watkins, the probable founder of QAnon and owner of the 8chan and 8kun far-right websites, who has used ‘save the children’ anti-paedophilia sentiment to unite people into the QAnon movement, but it appears highly probable that he has traded in child sexual abuse pornography —

What psychologists call ‘instantiation’* at larger social scales is probably the same thing as what psychotherapists call ‘copy processes’ or internalization at the interpersonal scale. Individual psychodynamic analysis digs deeper, into earlier strata of personality, than social psychology at larger social scales.

Some propaganda doesn’t work on the level of fact claims at all. E.g. Q drops — they don’t say much which could be easily falsified by events, they’re really bad poetry — morally and stylistically bad, but powerful. What Q drops do consistently is to instantiate Dark Triad-like (narcissism, machiavellianism, and psychopathy) personality traits and metapolitics.

Good propaganda — advocacy from civil society, and good poetry*, can counter those processes. The problem is how to scale it up efficiently.

Another aspect of popular discussion of ‘propaganda’ concerns the validity and ethics of using social cognitive heuristics — ‘who is this coming from and how much do I trust them?’ — to avoid being cognitively overloaded to the point of being unable to process rationally when it’s necessary. We cannot actually make ecologically rational decisions efficiently enough to survive and reproduce if we attempt to make every decision individually rationally. It is an inevitable part of the human condition that we have to use social heuristics, filter and forget, preferably with clear boundaries for when we stop heuristic processing and switch to the more cognitively costly rational processing.

Social heuristics certainly are fallible, as is every other means of knowledge. Even ‘pure reason’ is only as good as its inputs, otherwise it’s just garbage in ~ garbage out. Wisdom, however, is knowing the limits of all means of knowledge, and when, how much, they’re probably reliable, and especially when to watch out for errors.

In life, we actually need both good-enough accuracy and efficiency of cognition. Perfect cognition at a snail’s pace is no answer to a cheetah-speed problem. In a highly social animal’s life, speedy social heuristics alone would quickly get us into escalating conflicts and declining group size, which is hardly survivable and would reduce our reproductive fitness fast, so we have evolved balancing mechanisms of individual ecological rationality to check what is socially communicated to us via language.

‘Guilt by association’ is a logical error which occurs when we take social cognitive heuristics to excess and don’t switch to individual rational processing in time. On the other hand, if we don’t do enough social cognitive heuristic processing — ‘who said this? how much do I trust them?’ — we get overwhelmed, fail at filtering the information inputs to our rational processing, and end up with garbage opinions and dysfunctional activities.

How do people actually become ‘radicalised’ into anti-democratic or reactionary forms of Islamism, or into any other extreme ideology, and what does that word even mean? These words are often used to demonise others.

Asmolov also found it important to discuss in-/out-group psychology and the conflict-framing of narratives, but didn’t go into more depth about them, or about how the new forms of participatory propaganda and radicalisation relate.

‘Radicalisation’ etymologically does not necessarily mean something bad. It could have meant ‘back to the roots’ in a positive sense, rediscovering the better meanings of traditions before they got instrumentalized and perverted for the legitimation of domination and exploitation. In current contextual usage, however, it has come to mean: psychological and behavioural preparation for inter-group conflict, especially indiscriminate violence or deliberately targeting non-combatants in order to terrify a political community into acting or reacting in a way the actor wants or finds politically useful (i.e. terrorism). Thus it is apparent that radicalisation and propaganda are related phenomena.

In authoritarian propaganda, ‘radical’ often means any anti-authoritarian opponent, regardless of the civilian vs. combatant distinction. Vagueness is used tactically in authoritarian propaganda to provoke fears and then apply them to an out-group or enemy target, to manipulate meaning-making or action decisions by the in-group or to recruit new followers/ participants.

Apparently there isn’t a standard definition of ‘radicalisation’, so I suggest:

1) Alienation from one’s natal or home society — feeling disillusioned, losing trust in its authorities and majority opinions. Thus far it is not necessarily bad. Enculturation into radically different worldviews, possibly through travel or by studying historical, religious or philosophical texts, can produce a sense of alienation from one’s original culture, but it usually makes people more open-minded and friendly to more different other groups of people, not less.

2) Feeling uniquely victimised by a partially imagined total enemy, or by the whole world. If people have embodied traumatic experiences, they will be more predisposed to feeling this way. Over-generalizing and catastrophizing are common in PTSD. Catastrophizing cognitive distortions also tend to project more malicious intentionality than is actually the case.

This is a bifurcation point. Alienation plus overgeneralizing and catastrophizing can easily lead to preparedness for intergroup conflict, but it is possible, and it does happen in some cases, although it requires deliberate effort, to redirect it into more openness and valuing listening to strangers. In this sense, positive radicalisation also exists — I can cite myself and Emmi Bevensee* as examples — we both exited our previous enculturated worldviews through transformative experiences of listening to Syrians.

A possible intervention point is to culturally promote the value of deeply listening to strangers, aiming to make it at least as widely and highly valued as the value of free speech. This could greatly help to redirect vulnerabilities and potentialities for negative radicalisation into positive forms. A shared metaphorical image to start with could be the story of Abraham meeting the three strangers (Genesis 18:1–8,22). This story is the oldest shared cultural root of approximately 4.1bn people, 53% of the global population, whose three different traditions are often instrumentalized in negative radicalisation propaganda against each other. The first image in that story is Abraham’s response to seeing three strangers approaching: he kisses the ground and runs out to greet them, then shares a meal with them. And when the three strangers had left, “he remained standing in the presence of the Lord.”

Radicalisation propaganda aims to reduce the recognition of real ambiguity and ambivalence in the meaning-making and action decisions of its target audience about its target out-group or enemies. The psychological capacity to hold ambivalent evaluations and affects about others is part of a healthy personality (Nancy McWilliams), but is reduced by trauma adaptations. Collective traumas are hardly ever recognised, and only a rich minority of people suffering post-traumatic syndromes ever get medical treatment.

Humanistic journalism promotes understanding others realistically, including their real ambiguity and ambivalence. But journalism can go too far in presenting ambivalence, misrepresenting situations as more balanced, in power dynamics and in the proportions of actual harms caused by different actors, than they actually are: ‘both-sides-ing’ an issue.

Factors which seem to make a difference to which way people go include —

  • Social mixing vs. homophilic group isolation — if traumatized people are isolated, the risk of them interpreting their shared experiences through overgeneralizing and catastrophizing/ demonizing is higher than if they’re socially integrated with un-traumatized or recovered people.
  • There are no total enemies, at least not at group level, so constructing the idea of a total enemy group requires some sort of conspiracy theory or superstitious thinking. Superstitious thinking means accepting a causal explanation without substantiating evidence because it makes one feel fearful; on average it is adaptive to err on the side of believing and reacting to perceived threats, even though sometimes they will not be real.
  • Traumatized people are more susceptible to superstitious thinking because of increased adaptations for fight, flight, or freeze responses. PTSD isn’t ‘abnormal’ psychology; it’s an adaptation to living in a highly dangerous environment, which lasts unpleasantly long. Until people readapt to a more secure and less conflictual environment, they are more susceptible to negative radicalisation by exaggerated conflict-framing of news.

A dissenter in the in-group has more potential to destabilize the group’s cooperation in warfare than an enemy does, so dissenters are usually punished even more severely than enemies. Members of the in-group compete among themselves to perform in-group loyalty by participating in meaning-making and action decisions, until eventually their fear of collective punishment overrides their individual cognitive reality-testing. At this stage, it could aptly be named a collectively psychotic group identity.

The enemy group has to be imagined as a total, unambiguous, enemy, in order to justify indiscriminate warfare, not sparing non-combatants. If the enemy group were imagined as ambivalently not totally bad, or maybe persuadable, then the group preparing for war would less likely cooperate and win. So the evolutionarily successful strategy in war is to imaginatively homogenise the enemy group and to suppress individual cognitive reality-testing about ‘them’. Of course we can consciously and rationally adjust for that, but it takes effort.

Reversing negative radicalisation leading to inter-group warfare back to step 2, and redirecting people into a more realistic, positive radicalisation, is possible, but it doesn’t scale up as easily or efficiently in the current environment as radicalisation propaganda leading to war or elimination of the Other. Treating de-radicalisation like addiction recovery, and as psychotherapeutic intervention in personality structures, would be a better starting point than demonizing the radicalised in a way which effectively just reinforces their basic assumptions.

The biggest and most important challenge of our times is to creatively develop ways of scaling up deradicalisation, or redirecting radicalisation processes collectively and as efficiently as possible, into more open, more realistic, more humane, and less conflictual forms. My hunch is that AI poetry generators could be a good starting point for the strategy and tools to do that. To explain that fully would require another article as long as this one, and it’s probably more than one lifetime’s work, so I’ll leave it here.

Similar to Asmolov, I believe that the good kind of propaganda, or the sub-category of propaganda which is ‘advocacy’, is that which originates from civil society: those groups and networks of groups which are focused on beneficial activities, distributed (at least mostly) indiscriminately with regard to social categories of race, ethnicity, religion, etc., and quite independent of state and market dynamics. I’m also inspired by Joseph Brodsky’s philosophy of poetry, which highlights how poetry can communicate directly to each person, with the power to dissolve social categorisation, to inspire humanizing understandings of others’ subjective experiences, and to bring imagining and interpreting how they feel and think into dialogue with one’s own cultural context. I agree, but I would go a step further and look for online communities organised around civil society meaning-making and activities: not just individuals, but activity-based online communities and networks which engage in meaning-making and action decisions that humanize others and are open to difference and pluralism.

Civil-society interactions and relationships are more often localized, and more often based on face-to-face communication before online communication, so the interpersonal relationships they form tend to produce what are called ‘democratic values’ — autonomy-giving to others, mutual aid, and a fundamentalist kind of egalitarianism with regard to intrinsic human dignity.

In contrast, persuasive communications arising from state and market social entities, with more or less arbitrarily unbalanced power relations within and between them and other social entities, have vested interests in not listening to, not understanding, and not representing others realistically or fairly, because their power and wealth depend on arbitrarily hierarchical and disproportionate relations; so they produce more or less bad propaganda.

Defining good and bad, or worse and better, forms of propaganda by their origins has the advantage that we can know exactly and objectively from whom persuasive communications originally come, and we can know what ethical kind of relationships they have with other groups and with universal goods. ‘Worse and better’, in my mind, applies to democratic vs. authoritarian states. ‘Good and bad’ refers to civil-society-based vs. states’ propagandas. The most important way a democratic state can protect itself from authoritarian manipulative propaganda is to protect the independence and resilience of the civil society that democratic values emerge from, and hence its institutions.

We cannot directly know another person’s intentions, so defining ‘propaganda’ by the intention to manipulate, even if it may be true, is impractical as a definition of a measurable variable. (Like Avishai Margalit’s point, in The Decent Society (1998), that inherent human dignity may be real but is unmeasurable from the outside, whereas decency in relationships is measurable and defensible in law.) Defining ‘propaganda’ by intentions is therefore useless for the purposes of measuring it objectively.

However, measuring the community structure of the network, and what information circulates around which sub-networks, shows very clearly who is claiming what, what sort of moral judgements and political commitments they are trying to persuade their audiences of, and how.

If we have a metaethical theory about how democratic values emerge, from which kinds of interactions and relationships, like I argued above, then we can make reasonable inferences from social network community structure analysis to evaluating the persuasive communications instantiating different systems of metapolitical universals coming from different sources.
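A minimal, stdlib-only sketch of the kind of inference meant here, assuming we already have an account-to-community assignment from a prior community-detection step (all account names, community labels, and share records below are invented examples):

```python
# Illustrative sketch only: given (account, claim) share records and an
# account -> community assignment (e.g. from a prior community-detection
# step on the sharing network), find which claims circulate only inside
# one sub-network. Claims confined to one community are candidates for
# in-group-only persuasive framing. All data are invented.
from collections import defaultdict

community = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B"}
shares = [
    ("a1", "claim-x"), ("a2", "claim-x"),                      # stays inside A
    ("a3", "claim-y"), ("b1", "claim-y"), ("b2", "claim-y"),   # crosses A and B
]

# Map each claim to the set of communities it reached.
claim_reach = defaultdict(set)
for account, claim in shares:
    claim_reach[claim].add(community[account])

confined = sorted(c for c, comms in claim_reach.items() if len(comms) == 1)
print(confined)  # → ['claim-x']
```

The interesting analytic step, which this sketch leaves out, is then comparing the values and framings of the confined claims against those that circulate across communities.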

There also appears to be variation in the intensity of propaganda, i.e. in how much instantiation of metapolitical universals is implicitly or unconsciously implied, or mimicked to elicit followers to ‘independently’ authenticate it.

I first noticed this variation in intensity of propaganda-ness, or ‘propagandicity’, when reading the journalists Patrick Cockburn and the late Robert Fisk as a mature adult. I grew up reading them in my teens, when they wrote about the war in Iraq in The Independent. I had no clue at the time that they were fabulators, who dealt in reassuring their political niche audiences of the rightness of their prejudices and worldviews, using the suffering of their subjects to justify their in-group audiences’ comfortably ignorant smugness and willingness to sacrifice masses of people for ‘anti-imperialist’ solidarity with states, as long as those states performed anti-Americanism.

I first noticed high propagandicity in how Fisk hid his main causal and moral attribution claims in the adverbs connecting obvious truths or popular opinions to real consequences that were not plausibly or evidently connected, causally or morally, in the way he was depicting them. It was as though the main point of his opinion columns, and usually the only new thing in them, was hidden: implied in ways which only specialists in the subject, with more background knowledge than the general audience for generalist opinion writers, would ever notice.

We might be able to code and quantify ‘propagandicity’, by sentiment and syntax analysis, to automatically evaluate the intensity of propaganda.
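As a purely illustrative sketch of what such coding might look like, not a validated instrument: a toy score contrasting conflict-framing markers with evidential markers. The word lists and the scoring rule below are invented placeholders; a real measure would need validated lexicons, syntactic analysis of those attribution-carrying adverbs, and calibration against hand-coded examples.

```python
# Illustrative sketch only: a toy 'propagandicity' score from surface
# lexical features. The marker lexicons and the scoring rule are invented
# placeholders, not research-grade instruments.
import re

# Hypothetical marker lexicons (placeholders):
CONFLICT_MARKERS = {"enemy", "traitor", "invasion", "destroy", "wave"}
EVIDENCE_MARKERS = {"according", "study", "data", "source", "evidence"}

def propagandicity(text: str) -> float:
    """Return a 0..1 score: conflict-framing markers push it up,
    evidential markers push it down."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    conflict = sum(w in CONFLICT_MARKERS for w in words)
    evidence = sum(w in EVIDENCE_MARKERS for w in words)
    hits = conflict + evidence
    # Share of conflict markers among all marker hits; 0 if no markers.
    return conflict / hits if hits else 0.0

print(propagandicity("The enemy wave will destroy us, traitors everywhere"))  # → 1.0
print(propagandicity("According to the study, the data and evidence suggest"))  # → 0.0
```

Even this toy version makes the point that the variable is continuous, not binary: most real texts would land somewhere between the two extremes of the scale sketched in the bullet points below.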

  • Lowest intensity of propaganda: there is inevitably some minimal statistical sampling bias in the information about any situation, but to the extent that the communication instantiates empiricism (facts exist) and rationalism (reason is universal) as an orientation to the world, and lays out the facts first, implying that people generally can be trusted (humanism) and deserve the freedom to judge for themselves (democracy), it may be ‘propagandising’ a set of values and assumptions, but a good set.
  • Maximum intensity of propaganda: factual information is used only instrumentally, to propagate the values set and narrative, and exclusively to reinforce loyalty in meaning-making and action decisions to the authorities and majority opinions of the in-group, and hostility in meaning-making and actions towards its out-group(s) or enemies.

The lowest intensity of propaganda still propagates a values set, but in ways which set up egalitarian relations (or at least relations that are not unreasonably or arbitrarily unbalanced, where they allow for different specialisations) between the speaker and the audience(s), and between the audience and the subject(s).
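As a very rough illustration of what coding ‘propagandicity’ might look like, here is a minimal, stdlib-only Python sketch. All the word lists are hypothetical toy lexicons I made up for the example; a real system would need validated lexicons and proper syntactic parsing, not keyword matching. The score is simply the share of loaded and causal-adverb markers among all marker words found, echoing the observation above that hidden causal attribution tends to live in adverbs:

```python
import re

# Hypothetical hand-built marker lists -- illustrative only, not validated lexicons.
FACTUAL_MARKERS = {"according", "data", "documented", "measured", "reported"}
LOADED_MARKERS = {"traitors", "heroic", "evil", "puppets", "glorious"}
# Causal adverbs, where Fisk-style hidden attribution claims tend to hide.
CAUSAL_ADVERBS = {"inevitably", "naturally", "predictably", "unsurprisingly"}

def propagandicity(text: str) -> float:
    """Toy 0..1 score: loaded + causal-adverb markers as a share of
    all marker tokens (factual, loaded, and causal) in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    factual = sum(t in FACTUAL_MARKERS for t in tokens)
    loaded = sum(t in LOADED_MARKERS for t in tokens)
    causal = sum(t in CAUSAL_ADVERBS for t in tokens)
    marked = factual + loaded + causal
    if marked == 0:
        return 0.0  # no markers found: report the minimum rather than divide by zero
    return (loaded + causal) / marked

low = propagandicity("According to reported data, casualties were measured.")
high = propagandicity("Predictably, the evil puppets inevitably betrayed us.")
```

Here `low` comes out at 0.0 and `high` at 1.0, matching the two poles of the intensity scale sketched above; everything between them is the interesting, hard-to-measure middle.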

Asmolov’s theory is the most up to date, but it underestimates the factor Lasswell highlighted: how propaganda implicitly propagates or instantiates metapolitical universals, which apply to all subsequent decisions, not just the immediate particular one.

I’ve seen some patently absurd propaganda, e.g. “Mystic Baba Yaga predicts…” (it was on sott.net in 2016). I can’t believe it was even intended to persuade the audience of the predictions, but it makes sense if its purpose is to entrain a whole way of thinking.

So I return to the oldest, ‘propaganda de fide’, sense of the word: propaganda is that which propagates a whole system of values and basic assumptions, especially epistemological assumptions. In this sense, which is also Lasswell’s sense, it is not necessarily or always bad. Advocacy can be considered a sub-category of propaganda, but a good kind of it, originating from civil society.

Using social network analysis to map the community structure of the network, and to keep track of which persuasive communications come from which sub-network, would give us a reasonable approximation to ‘whose propaganda is this?’, and from that we can reasonably evaluate how much to trust it.
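A minimal sketch of what that could look like in code, using only the Python standard library. The amplification graph, account names, and messages are all invented for illustration; the community detection is a simple, deterministic label-propagation pass, not the more robust methods (e.g. modularity-based) a real analysis would use:

```python
from collections import Counter, defaultdict

# Hypothetical amplification graph: an edge means two accounts
# repeatedly share each other's posts. Names are illustrative.
edges = [("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster A
         ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster B
         ("a3", "b1")]                                # one bridging tie

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def label_propagation(adj, rounds=10):
    """Deterministic label propagation: each node repeatedly adopts the
    most common label among its neighbours (ties broken lexicographically)."""
    labels = {n: n for n in adj}   # start: every account is its own community
    for _ in range(rounds):
        for n in sorted(adj):
            counts = Counter(labels[m] for m in adj[n])
            labels[n] = max(counts, key=lambda lab: (counts[lab], lab))
    return labels

labels = label_propagation(adj)

# Attribute each message to the community of the account that posted it,
# approximating 'whose propaganda is this?'.
messages = {"a1": "claim X", "b2": "claim Y"}
by_community = defaultdict(list)
for account, text in messages.items():
    by_community[labels[account]].append(text)
```

On this toy graph the two triangles end up in separate communities despite the bridging tie, so “claim X” and “claim Y” are attributed to different sub-networks, which is the evaluative starting point the paragraph above describes.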

My other minor doubt about Asmolov’s (2019) theory is that I’m not convinced people always engage in in-/out-group objectification and social categorical thinking; as I explained above, the earliest stages of radicalisation can possibly bifurcate in a good direction too.

