How can we communicate more effectively about the filtering effects and biases of our information environments?
I’m still shocked by how many people don’t realise how much the information they see is filtered according to their social background.
The two main sets of factors shaping your information environment are:
A) Where you originated or first entered the social network (for both offline and online networks), and
B) Your interaction habits and the associations you’ve formed since then, which matter even more on social media than in broadcast media.
Together these mostly determine what you will and won’t see represented.
(In social network theory, the factors in (A) are called propinquity and those in (B) homophily, but you don’t need to remember the technical terms; the toy sketch just below illustrates how the two combine.)
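To make that concrete, here is a deliberately crude toy simulation (in Python; it is not a model of any real platform, and every number, name and probability in it is invented purely for illustration). Agents start with mostly “local” ties (A), then gradually rewire towards people like the ones they already know (B), and we measure how skewed the resulting view becomes:

```python
# Toy illustration only (not a model of any real platform); all numbers,
# names and probabilities are invented for the sake of the example.
import random

random.seed(1)

N_AGENTS = 200          # 4 groups of 50 agents each
N_GROUPS = 4
group = {i: i % N_GROUPS for i in range(N_AGENTS)}

# (A) Propinquity: you start with mostly "local" ties, i.e. ties to people
# who share your starting point / social background.
def initial_ties(agent, n_ties=20, p_local=0.8):
    same = [j for j in range(N_AGENTS) if j != agent and group[j] == group[agent]]
    other = [j for j in range(N_AGENTS) if group[j] != group[agent]]
    ties = set()
    while len(ties) < n_ties:
        pool = same if random.random() < p_local else other
        ties.add(random.choice(pool))
    return ties

ties = {i: initial_ties(i) for i in range(N_AGENTS)}

# (B) Homophily: over time you drop ties and form new ones, usually with
# people similar to those you already interact with.
def rewire_step(agent, p_prefer_same=0.8):
    dropped = random.choice(sorted(ties[agent]))
    ties[agent].discard(dropped)
    if random.random() < p_prefer_same:
        candidates = [j for j in range(N_AGENTS)
                      if j != agent and group[j] == group[agent] and j not in ties[agent]]
    else:
        candidates = [j for j in range(N_AGENTS)
                      if j != agent and j not in ties[agent]]
    ties[agent].add(random.choice(candidates))

for _ in range(100):                      # let interaction habits accumulate
    for agent in range(N_AGENTS):
        rewire_step(agent)

# What share of the content each agent sees comes from their own group,
# assuming everyone only sees content shared by their ties?
own_share = sum(
    sum(1 for j in ties[i] if group[j] == group[i]) / len(ties[i])
    for i in range(N_AGENTS)
) / N_AGENTS
print(f"Each group is {1 / N_GROUPS:.0%} of the population,")
print(f"but the average agent sees {own_share:.0%} of their content from their own group.")
```

On these made-up numbers, each group is only a quarter of the population, yet the average agent ends up seeing roughly four fifths of its content from its own group, without anyone deliberately excluding anything.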
Unless you take a careful, proactive and strategic approach to seeking unbiased information on an issue you care about (and it’s probably impossible for one person to manage this sustainably for more than a few issues, because it’s so effort-intensive in this environment), you can’t reasonably believe that what you’ve seen and heard represents a realistic summary picture.
The information you’re presented with, especially on social media, is selected and ranked to fit your consumer preferences and to attract your attention to an advertising space, not to show you an unbiased sample of reality.
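Here is a minimal sketch of the difference (again in Python, with a made-up “predicted engagement” score; no real platform’s ranking model looks like this): the same content pool gives a roughly even random sample, but a preference-ranked feed of the same size is dominated by whatever the user already clicked on.

```python
# Toy contrast only (this is not any platform's actual ranking algorithm);
# the "predicted engagement" score and all numbers are invented.
import random
from collections import Counter

random.seed(2)

TOPICS = ["politics_a", "politics_b", "sport", "science", "local_news"]

# A content pool with a roughly even spread of topics.
pool = [{"id": i, "topic": random.choice(TOPICS)} for i in range(1000)]

# The user's past clicks are heavily skewed towards one topic.
user_clicks = Counter({"politics_a": 40, "sport": 8, "science": 2})

def predicted_engagement(item):
    # Crude stand-in for an engagement model: more past clicks on a topic
    # means a higher score, plus a little noise.
    return user_clicks.get(item["topic"], 0) + random.random()

def topic_shares(items):
    counts = Counter(item["topic"] for item in items)
    return {t: round(counts.get(t, 0) / len(items), 2) for t in TOPICS}

random_feed = random.sample(pool, 50)                                   # unbiased sample
ranked_feed = sorted(pool, key=predicted_engagement, reverse=True)[:50] # preference-ranked feed

print("Topic mix of the whole pool:  ", topic_shares(pool))
print("Topic mix of a random sample: ", topic_shares(random_feed))
print("Topic mix of the ranked feed: ", topic_shares(ranked_feed))
```

On these invented numbers the random sample mirrors the pool, while the ranked feed consists almost entirely of the user’s favourite topic: the feed reflects your preferences back at you, not the world.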
Facebook (and probably Twitter too, though I don’t remember seeing Jack say so) believe that a good information space is a “marketplace of ideas”, and that the mythical Invisible Hand will somehow guide aggregate consumer preferences towards sharing and believing truth more than falsehood. There is no evidence, and no good reason to assume, that aggregate preferences will propagate truths more often than falsehoods.
This basic point about our information environment is so important, and yet it still doesn’t seem to have got through to most people. How on earth can we communicate it in a way that most people will understand and remember?
When we talk about the ‘filter bubble’ effect, people seem to assume it’s everyone else’s problem and that their own means and limits of knowledge are just fine. How do we get people to accept that acquiring consistently accurate knowledge on any issue takes effort, time to pay attention and search, and usually money, and that unless you’ve invested those, what you think you know is very unlikely to be realistic?
Social media companies apparently set out with the intention to create a more egalitarian information space — less gatekeeping, so, they assumed, less bias.
The lack of structures and institutions for supporting collective memory, and of stable reputational networks for assigning costs and benefits to credibility and non-credibility, makes it more costly (so costly it’s impossible to sustain) for information consumers to do due diligence on what they’re presented with before sharing or believing it. The leisure time, education and skills needed to check sources efficiently before believing, sharing or reacting to content are more of a privilege now, not less.
Acquiring consistently accurate knowledge of any issue over time via social media requires even more effort, time and resources than via broadcast media, so it is effectively a more elitist information space, not a less elitist one.