Sander van der Linden
Foolproof: Why We Fall for Misinformation and How to Build Immunity

Comparing misinformation to a virus, as Sander van der Linden does in Foolproof: Why We Fall for Misinformation and How to Build Immunity, is a smart move. After Covid-19, most of us are now armchair virologists who like to think we understand infection, inoculation, immunity and R-numbers. The comparison is not just a gimmick. It holds up well, providing an integrating metaphor that runs through the whole book.
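The R-number framing can even be made concrete with a toy calculation. The sketch below is mine, not the book's, and every figure in it is invented: it simply shows how inoculating a growing share of a population drags the effective reproduction number of a rumour below 1, the point at which an outbreak dies out.

    # Toy sketch (not from the book): effective reproduction number of a
    # rumour once part of the population has been 'inoculated'.
    # R0 and EFFICACY are hypothetical figures for illustration only.

    def effective_r(r0: float, coverage: float, efficacy: float) -> float:
        """R_eff = R0 * (1 - coverage * efficacy): exposures that hit a
        successfully inoculated host do not transmit."""
        return r0 * (1 - coverage * efficacy)

    R0 = 3.0          # hypothetical: each sharer persuades three others
    EFFICACY = 0.75   # hypothetical: prebunking blunts 75% of exposures

    for coverage in (0.0, 0.25, 0.5, 0.9):
        print(f"coverage {coverage:>4.0%} -> R_eff = "
              f"{effective_r(R0, coverage, EFFICACY):.2f}")

With these made-up numbers, 90% coverage brings R_eff to roughly 0.98: below 1, at which point the ‘infection’ fizzles out.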

The author is Professor of Social Psychology in Society at Cambridge University, but Foolproof is not an academic tome. It’s more like a self-help guide, with chapter titles such as ‘The truth is out there’ and ‘How to inoculate your friends and family’. For someone who has spent a career researching fake news, the author - who has been called a cognitive immunologist - wears his learning lightly, making this an easy read.

The host

Although the title promises to show us how fake news might dupe only the fool, we learn early on that none of us is immune. We, too, can be fooled by misinformation or disinformation, and the first part of this valuable book sets out to explain the defence weaknesses of the host: us. We are all susceptible to fake news, “including trained experts such as myself,” the author disarmingly yet unsettlingly admits. This has been known for some time, apparently. The philosopher and mathematician Blaise Pascal is quoted:

People almost invariably arrive at their beliefs not on the basis of proof but on the basis of what they find attractive. (Blaise Pascal, On the Art of Persuasion, 1658)

Knowledge is also “community-based”: “We endorse a false claim because we deem the source to be credible”. We’ve been here before, of course. (See my blog post Truth sidelined by tribal epistemology, on this site, for a deeper dive written after reading a dozen or so books on the subject.) The binary ‘with us or against us’ holds tenacious sway. It’s almost as if the host is diseased even before the virus arrives. What hope is there that immunity can be built?

The disease

We learn various things that we perhaps already suspected, especially that the misinformation disease has a “predictable signature”. There are “recurring commonalities” in the narrative of, for example, conspiracy theories. They mostly assume “evil or nefarious intentions”. There are, we are told, “virtually no positive conspiracy theories”.

Research into Twitter-launched conspiracy theories reveals a distinctive linguistic style. Anger, negativity and swearing are commonplace, as is talk of other groups and of power structures. Conspiratorial views often appear paranoid, and distrust of officialdom is a key ingredient. Those not with the project, not ‘in the know’, are ‘sheeple’ duped by the orthodoxy of the ‘mainstream media’. Knowing these patterns should help us identify misinformation as it heads toward us.

Vectors of transmission

With Covid, it took us a while to understand that respiratory droplets were the main vector of transmission; that realisation came long after we had all been wiping door handles. For misinformation, the vectors of transmission tend to be Facebook, Twitter, YouTube, Truth Social (the clue’s in the name, see), WhatsApp, TikTok, Fox News and Telegram. WhatsApp’s popularity, due in large part to its end-to-end encryption, gives it huge reach: over 2 billion users worldwide, with India its largest market. Mob lynchings in Tamil Nadu were triggered by fake news spread on the platform, something WhatsApp is aware of and working on: the author has worked with the company on countering such proliferation.

Twitter has some 330 million monthly active users worldwide, giving misinformation an industrial reach via views, likes and shares. Research into ‘rumour cascades’ on Twitter has demonstrated that falsehoods travel “faster, deeper and more broadly” than true claims, in all categories of information. Falsehoods are around 70% more likely to be retweeted than truths, and the truth takes roughly six times as long as falsehood to reach the same audience.
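Those two statistics compound. A back-of-the-envelope cascade model (my own illustration, not the method of the study the book cites; all figures below are invented) shows how a modest per-exposure resharing advantage multiplies at every hop, which is why false cascades run deeper and reach an audience far sooner.

    # Toy cascade model: each sharer exposes a fixed number of followers,
    # each of whom reshares with probability p. The 70% retweet advantage
    # for falsehoods compounds at every generation. Numbers are invented.

    def expected_reach(p_share: float, followers: int, hops: int) -> float:
        """Expected cumulative exposures after `hops` generations."""
        reach, sharers = 0.0, 1.0
        for _ in range(hops):
            exposed = sharers * followers
            reach += exposed
            sharers = exposed * p_share  # expected resharers next round
        return reach

    P_TRUE = 0.02            # hypothetical reshare rate for true claims
    P_FALSE = P_TRUE * 1.7   # falsehoods: 70% more likely to be reshared

    for hops in (2, 4, 6):
        t = expected_reach(P_TRUE, 100, hops)
        f = expected_reach(P_FALSE, 100, hops)
        print(f"{hops} hops: truth ~{t:,.0f}, falsehood ~{f:,.0f} exposures")

After six hops the invented falsehood has reached roughly ten times the audience of the invented truth; the same arithmetic underlies “faster, deeper and more broadly”.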

Over on Google and YouTube (2 billion monthly users worldwide), where the main concern is maximising engagement so that adverts are eyeballed, recommendation tools have been shown to provide “an easy gateway to extremism”. On Facebook (a source of ‘news’ for many people), recent modelling suggests that, without intervention, anti-vax discourse will dominate within the next ten years. Those of us interested in these trends aren’t surprised. Are we relieved that there are clever people thinking - and doing something - about it?

Although one of my favourite quotes (from US Army colonel turned historian Robert Bateman: “once, every village had an idiot; it took the internet to bring them all together”) never fails to raise a laugh, the tragedy remains social media’s plague of unverified identity. (The Clean up the internet campaign would help redress this.) Not only can anyone pretend to be anyone else, unaccountable for any inflammatory or merely unsubstantiated assertion, but the real problem is one of scale, brought about by automated bot postings. Until this is dealt with, transmission rates for all sorts of information viruses will remain high and may yet become stratospheric. Add AI to that - again at scale - and a new era will possibly be born.

Elon Musk’s flogging of Twitter’s little blue verified-account icons, trumpeted on 28th March 2023 and enacted shortly after, is not the way to go. At $8 a month, this is an easy spend for any bot-account holder, merely making more money for the platform and its owner. Why not require proper verification (not just payment) from all the platform’s users? It’s not as if the announcement didn’t acknowledge the problem.

Until quite recently, the best defence against misinformation (and even disinformation - see below) was thought to be rational argument. Karl Popper’s model of conjecture and refutation seemed to be enough, a view I myself subscribed to. It was assumed that if one demanded of those refuting current scientific orthodoxy (on, for example, vaccination) that they marshal the same discipline in their attempted debunk as had been applied to the established conjecture, that would suffice. Appeals to reason have an illustrious history, surely? We now know that this often isn’t enough. The zone has been flooded with shit, as Steve Bannon recommended, and confirmation bias means that the more you push the rational line, the less the appeal to reason works. Social networks really do provide the perfect vector for demonstrating Pascal’s attractiveness-rather-than-proof proposition. Something else is needed.

The inoculation

One of the hallmarks of this volume is its density of research references. A 39-page bibliography and a further 23 pages of notes buttress much of the book’s detail, without in any way slowing down one’s ability to follow the argument. Many of the contentions set out by the author have been put to the test. “Even-keeled debate” on Popperian principles of conjecture and refutation - exemplified by the BBC’s even-handed inclusion of contrary opinion, termed “false balance” - has been demonstrated pretty much to guarantee increased belief in the conspiracy-driven, non-scientific view. The example of anthropogenic climate change is perhaps the stand-out.

The bulk of the book introduces us to the new science of prebunking, in which people are pre-exposed to a weakened form of an incoming dose of propaganda so as to stimulate their attitudinal defences. The best-known example came on the eve of Russia’s full-scale invasion of Ukraine, when the US and Britain warned the public that they might shortly be exposed to Russian propaganda, including fake videos purporting to show a Ukrainian attack on Russian territory. This deprived Russia of the value of waving a false flag.

The question is whether prebunking can be deployed effectively against all bogus science, not just spoof claims about the climate or about smoking. Rather than trying to inoculate against specific falsehoods, can one equip people’s defences to block different strains of misinformation? The answer seems to be a cautious yes. Where fact-based inoculation confers narrow-spectrum refutation of specific falsehoods, technique-based inoculation is a broad-spectrum, tactic-recognition defence. Both types can be administered either passively or actively.

Whilst it’s reassuring to learn that a defence against misinformation can be mounted, this volume didn’t show how it could be done at scale. One is left with the impression that the best route is not social media or public-information campaigns but education, from the early years onwards. The lesson from Finland, Estonia, Latvia and Lithuania, where investment in countering misinformation through the education system has given citizens long-lasting inoculation, is that one really does need to roll the vaccine out early.

Footnote on terminology

Some essential vocabulary:

  • misinformation is false or incorrect information, even if caused through innocent error;
  • disinformation is misinformation with an intention to deceive or harm;
  • propaganda is disinformation deployed in the service of a political agenda, possibly state-sanctioned.

It’s mildly disconcerting that the author then uses ‘misinformation’ in preference to ‘disinformation’ throughout, as he declares, “so as to remain agnostic about intent unless I specifically refer to well-documented disinformation and propaganda campaigns”. One has to remember that.

Cover design

The news-wrapped bomb on the book’s cover seems at odds with the virus-and-antidote metaphor that gives the content its structure. A hypodermic-and-virus graphic might have been more apt.

[Foolproof: Why We Fall for Misinformation and How to Build Immunity by Sander van der Linden is available in the UK from Hive.co.uk.]