The Post-Truth Society is a Failure to Teach Probability

Many commentators have suggested that recent unexpected events, like the continuing support of US voters for Donald Trump, or the victory of the UK Brexit campaign, should be attributed to a new era of human reasoning, in which truth no longer matters. Ironically, the real truth is much more subtle, and considerably more damning of the very media who peddle the post-truth myth.

Probabilistically, we may quantify our trust in expert opinion by measuring the chances that some event happens, given that experts have predicted it. Mathematically we write

\displaystyle P(\textrm{trust}) = P(A \textrm{ happens } | \textrm{ experts predict } A)

Now the media advocate that we can estimate this directly. Just think about all the times that experts have made a prediction, and work out how often they got it right. Although this is mathematically correct, it isn’t of much practical use. After all, who wanders round reciting all the predictions that experts have ever made? The predictions we tend to remember are the ones that turned out to be wrong!

This problem is inherent in the traditional frequentist interpretation of probability, as taught in schools. On this view, probabilities are calculated empirically, by repeating an experiment many times and looking at the outcomes. While this works fine for tossing coins, it doesn’t extend intuitively to more complicated situations.

So when the media (and to some extent experts themselves) encourage people to make the estimate directly, we’re bound to make mistakes. In particular, the predictions we remember are disproportionately the ones that turned out to be wrong. This artificially bumps up the importance of outlier events like the financial crash of 2008. The result is that we put a very low value on our trust in the experts.
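We can see how badly this selective memory skews the direct estimate with a quick simulation. Every number here is a made-up assumption for illustration: experts who are right 70% of the time, and a memory that retains failed predictions far more readily than successful ones.

```python
import random

random.seed(42)

# Illustrative assumptions, not data: experts make 1000 predictions,
# and 70% of them turn out to be correct.
TRUE_ACCURACY = 0.7
correct = [random.random() < TRUE_ACCURACY for _ in range(1000)]

# The honest frequentist estimate of P(trust) uses every prediction.
fair_estimate = sum(correct) / len(correct)

# Assumed memory bias: we recall only 10% of successful predictions,
# but 90% of failures, because failures are more salient.
remembered = [c for c in correct
              if random.random() < (0.1 if c else 0.9)]
biased_estimate = sum(remembered) / len(remembered)

print(f"fair estimate:          {fair_estimate:.2f}")   # around 0.70
print(f"memory-biased estimate: {biased_estimate:.2f}")  # around 0.20
```

The mathematics of the direct estimate hasn’t changed; only the sample has. But the biased sample is the one our memories actually hand us.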

Fortunately, there’s a better way of looking at the problem, which is sadly unknown to the vast majority of people. It goes by the name of Bayes’ Theorem. Despite the technical name, it represents a far easier way to quantify uncertainty. What’s more, it lies closer to our intuition that probability should measure how much we believe in something.

Put simply, Bayes’ theorem says that our belief that A will happen, given supporting evidence from experts, is proportional to the chances that experts predicted A given that it happens, multiplied by our prior belief in A. (Strictly speaking, there’s another factor involved, but as we’ll see below, we won’t need it.) In maths we can write

\displaystyle P(\textrm{trust}) \propto P(\textrm{experts predicted } A \ | \ A \textrm{ happens}) \times P(A \textrm{ happens})
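For the curious, the factor we’re omitting is the overall chance that experts predict A, which sits underneath as a denominator. The full statement of Bayes’ theorem reads

\displaystyle P(A \textrm{ happens } \ | \ \textrm{ experts predicted } A) = \frac{P(\textrm{experts predicted } A \ | \ A \textrm{ happens}) \times P(A \textrm{ happens})}{P(\textrm{experts predicted } A)}

Since the denominator takes the same value whether or not A happens, it will cancel out when we compare trust with doubt below.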

This makes sense – we should update our prior expectations using the likelihood: the chances that experts predicted A in the worlds where it does happen. Of course, some people are so entrenched in their views as to never change their minds. But there are often many “floating voters” who might adopt just such a strategy in forming an opinion, if encouraged to do so! The undecided segment of the population played an important role in the Brexit vote, and may yet become a crucial factor in the US election.

So what’s the advantage of this formula over the frequentist interpretation? Because it sits closer to our intuition, it’s much easier for us to make a good guess. The first term in the product asks us to consider all the times that events have happened, and determine the proportion of the time that experts correctly predicted them. Notice that this reverses the perspective we had in the original direct formula!

This change of viewpoint is great news for our intuition. Most of us can think of various economic or political events that have happened over our lifetime. And while some of these were unexpected (like the financial crash), many others were not – witness the long periods of economic stability and the foregone conclusions of many elections over the past 20 years. So we are likely to say that when an event happens, experts have often predicted it first.

Let’s look at this mathematically. Rather than calculating the probability of trust, which is a slightly wishy-washy concept, it’s better to compare our trust and our doubt. We do this automatically in everyday life, as a sanity check. If we have niggling doubts, it’s probably because we’ve overestimated our trust, and vice versa. In equation form we’ll determine

\displaystyle \textrm{trust factor} = \frac{ P(\textrm{trust}) }{ P (\textrm{doubt}) }
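Here trust is defined exactly as before, and doubt is simply its mirror image: the chances that A fails to happen, even though the experts predicted it. That is,

\displaystyle P(\textrm{doubt}) = P(A \textrm{ doesn't happen } \ | \ \textrm{ experts predicted } A)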

If this is (say) bigger than 3, we should be happy to trust the experts, since our trust far outweighs any doubts we might have. Let’s suppose that, prior to hearing the expert opinion, we have no strong view as to whether A will happen or not. In other words

\displaystyle P(A \textrm{ happens}) = P(A \textrm{ doesn't happen}) = 0.5

Then using Bayes we find that our trust factor is a so-called Bayes factor, namely

\displaystyle \textrm{trust factor} = \frac{P(\textrm{experts predicted } A \ | \ A \textrm{ happens})}{P(\textrm{experts predicted } A \ | \ A \textrm{ doesn't happen})}
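To see where this comes from, write both trust and doubt out in full using Bayes’ theorem, and divide:

\displaystyle \textrm{trust factor} = \frac{P(\textrm{experts predicted } A \ | \ A \textrm{ happens}) \times P(A \textrm{ happens})}{P(\textrm{experts predicted } A \ | \ A \textrm{ doesn't happen}) \times P(A \textrm{ doesn't happen})}

The awkward normalising factor, the overall chance that experts predicted A, appears in both top and bottom, so it cancels. And thanks to our 50:50 prior, the two remaining prior terms cancel too, leaving the Bayes factor above.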

We’ve already argued that the term in the numerator is plausibly large. It is also sensible to think that the term in the denominator is relatively small. We’d all agree that it’s rather common for major events not to happen. And of all the events that don’t happen, experts rarely said beforehand that they would. Of course, there are some doomsayers who are forever forecasting disaster, but they’re mostly on the periphery of expert opinion.
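To make this concrete, here’s a back-of-the-envelope calculation. The inputs are pure assumptions for illustration: say experts flag 70% of the events that do happen, but cry wolf on only 20% of the events that don’t.

```python
# Illustrative assumptions, not measured data:
p_predicted_given_happens = 0.7  # experts flagged A when it did happen
p_predicted_given_not = 0.2      # experts flagged A when it didn't

# The Bayes factor from the formula above.
trust_factor = p_predicted_given_happens / p_predicted_given_not
print(f"trust factor = {trust_factor:.1f}")  # 3.5, clearing the threshold of 3
```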

So if the numerator is large and the denominator is small, we can conclude that our trust factor is quite large indeed. It’s not unreasonable to suspect it’ll clear our threshold of 3. With the right intuitive tools, we’ve arrived at a more reasonable level of trust in our experts. Sadly, such arguments are few and far between in a media hell-bent on a “keep it simple, stupid” attitude, and among expert spokespeople convinced that “dumbing down” is the only way to communicate. Ironically, this is more alienating and less intuitive than Bayesian logic!

The post-truth society is a myth, created by a media who are themselves confused about probability. More accurately, we are currently living in a pre-inference society. Bayesian estimation has never been adequately taught in schools, and for many years this was no great loss. But in the modern world, with its ubiquitous availability of data, it is imperative that we provide people with adequate logical tools for drawing reasonable inferences and making sound decisions.

P.S. For balance, here’s why Bayes isn’t always best!