Bayesian Believers

This post was originally drafted on 2021-04-03. I'm picking it back up now, on 2022-07-27, after resolving to finish my 100 things challenge.

Julia Galef is a co-founder of the Center for Applied Rationality (CFAR). I was super surprised to see just how long she has been posting videos on YouTube (nearly a decade, btw). I've been following her on Twitter for years, having no idea that she'd basically established herself in the rationalist YouTube scene as the Bayesian Reasoner. Anyways, I binged quite a few of her videos. One of the most interesting ones was called How I use "meta-updating".

Meta-updating is a formalization of the idea that, if I try to convince you (someone I know to be rational) of an argument and fail, then that should tell me that there is a lower probability my argument is actually right. I should temper my belief in my argument accordingly. Additionally, if I still remain adamant about my belief, even after updating its strength, then you (knowing me to be rational) should be less confident in your original stance.

To be clear, "strength" here refers to a Bayesian probability. For those unfamiliar with probability theory, a quick recap:

Probability theory deals with randomness and its measurement. The world is extremely complex and chaotic—so chaotic, in fact, that many events appear to happen at random. Take for example a coin flip. If you knew everything about exactly how you flipped the coin, you could calculate its outcome beforehand—it would not seem random at all! But of course, in reality, we rarely have all this information. Probability theory helps us measure the likelihood of events given incomplete information about the world.

When we talk about probability, we usually mean it in a frequentist sense. We look at the outcomes of an experiment over many trials and say that the probability of an outcome is its frequency amongst all trials (notice how this constrains the range of probabilities to between 0 and 1, inclusive; an outcome can happen either none of the time, all of the time, or somewhere in between). To get a better sense of the likelihood, you need to run more trials. If you want to know exactly what that likelihood is, you'd need either an infinite number of trials or complete information about the world. Of course, there are cons to this paradigm of probabilistic thinking. The one that comes to mind most naturally is the difficulty of assigning a probability to something that can only happen once; a one-off, time-dependent event cannot be tried again and again to gain more information about its "true likelihood."
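
To make the frequentist picture concrete, here's a quick Python sketch. (The coin's "true" bias of 0.5 is baked into the simulation as an assumption; in real life that's exactly the number you'd be trying to estimate.)

```python
import random

def empirical_frequency(trials, p_heads=0.5):
    """Flip a simulated coin `trials` times and return the fraction of heads."""
    heads = sum(random.random() < p_heads for _ in range(trials))
    return heads / trials

# The estimate drifts closer to the true 0.5 as the number of trials grows,
# but no finite number of flips pins it down exactly.
for n in [10, 100, 10_000, 1_000_000]:
    print(n, empirical_frequency(n))
```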

Bayesian probability has a different take on the matter. In short, it sees probability as a measure of the certainty one has about some belief, given their knowledge about the world. A probability of 0 would mean the statement is surely false, and 1 that it is surely true. Any number in between can be interpreted as a "likelihood of truth." It ditches frequentism for a more knowledge-based approach to probabilistic thinking.
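
To tie this back to meta-updating, here's a toy Bayesian update in Python. Every number in it is made up purely for illustration: my prior confidence, and how likely a rational friend is to remain unconvinced in the worlds where I'm right versus wrong.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# I start out 80% confident that my argument is right.
prior = 0.80
# Evidence: a rational friend hears the argument and remains unconvinced.
# Suppose that's only 20% likely if I'm right, but 70% likely if I'm wrong.
posterior = bayes_update(prior, likelihood_if_true=0.2, likelihood_if_false=0.7)
print(f"Updated confidence: {posterior:.2f}")  # ~0.53, noticeably less sure than 0.80
```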

Now, an interesting phenomenon arises when we use the concept of meta-updating to consider a case in which two rational people, with opposing beliefs but common knowledge of each other's beliefs, try to agree to disagree. Spoiler alert: they can't.

The best colloquial description I've found of how this works is on pages 7 through 10 of Tyler Cowen and Robin Hanson's paper "Are Disagreements Honest?", but I'll take a shot at explaining it myself for the sake of keeping you here.

Imagine that John and Jane are both rational people, and know each other to be so. They both witness a robber flee a bank, and begin discussing the robber's appearance so they can be helpful to the local authorities. John thinks the robber was 6 feet tall and 230 pounds. Jane thinks the robber was 5'10" and 200 pounds. They each know how certain the other is of their belief.

John shares his description with Jane. Jane, knowing John to be rational, becomes a little less sure that the robber was as short as 5'10". John must have seen or noticed something that she didn't, which is why he is so confident in his opinion. And so Jane tempers her certainty accordingly.

However, she still thinks the robber was closer to 5'10" than to 6'. John sees this and updates his own belief: if his confident position barely budged her opinion, then she must know or have seen something that he didn't. And so John tempers his certainty in turn.

As long as both John and Jane have full knowledge of each other's opinions and degrees of certainty, they will continue back and forth in this fashion until they eventually converge on the same belief. It is impossible for them to converge on two separate beliefs.
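
Here's a toy numerical sketch of why trading estimates forces agreement. I'm using a deliberately simplified Gaussian model (a shared prior over the robber's height in inches, plus one noisy private glimpse per witness), and every number below is invented. In this setup, announcing a posterior mean effectively reveals the underlying observation, so a single exchange is enough for the two beliefs to collapse into one.

```python
def posterior_mean(prior_mean, prior_var, observations, obs_var):
    """Posterior mean of a Gaussian belief after Gaussian-noise observations."""
    precision = 1 / prior_var + len(observations) / obs_var
    weighted = prior_mean / prior_var + sum(observations) / obs_var
    return weighted / precision

# Shared prior: the robber is probably around 70 inches tall, give or take.
PRIOR_MEAN, PRIOR_VAR, OBS_VAR = 70.0, 9.0, 4.0
john_obs, jane_obs = 72.5, 69.5  # private, noisy glimpses of the robber

# Round 1: each forms a belief from their own glimpse alone.
print(posterior_mean(PRIOR_MEAN, PRIOR_VAR, [john_obs], OBS_VAR))  # ~71.7
print(posterior_mean(PRIOR_MEAN, PRIOR_VAR, [jane_obs], OBS_VAR))  # ~69.7

# Round 2: each announced mean can be inverted to recover the other's glimpse,
# so both can now condition on both observations -- and get the same answer.
print(posterior_mean(PRIOR_MEAN, PRIOR_VAR, [john_obs, jane_obs], OBS_VAR))  # ~70.8
print(posterior_mean(PRIOR_MEAN, PRIOR_VAR, [jane_obs, john_obs], OBS_VAR))  # ~70.8
```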

In other words: two ideal truth-seekers with common knowledge of each other's beliefs can never agree to disagree. As long as they are communicating and updating, they are converging towards belief in a single truth. This result is called Aumann's agreement theorem, as it was proven by Robert Aumann in a 1976 paper titled "Agreeing to Disagree."
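
Stated a little more formally (this is my paraphrase of the 1976 result, not a quote): if two agents share a common prior, and their posterior probabilities for some event $E$ given their respective private information $\mathcal{I}_1$ and $\mathcal{I}_2$ are common knowledge between them, then those posteriors must be equal,

$$
P(E \mid \mathcal{I}_1) = P(E \mid \mathcal{I}_2).
$$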

Aumann's theorem may seem impossible to many, like there must be some loophole or exception which makes it obviously untrue in the real world. Well, as Hal Finney points out, that's because, applied directly to real humans, it clearly is.

For one, humans are not at all perfect Bayesian agents with common knowledge of each other's beliefs. For another, we suck at updating our beliefs: we are all riddled with cognitive biases (including but not limited to overconfidence) which cloud our judgement and our ability to update on evidence appropriately. However, even after relaxing Aumann's original assumptions, Cowen and Hanson find in their paper that there is still strong reason to believe in his results.

Another loophole one might consider is that the process of iterative belief-updating might take forever, or at least much longer than any human lifetime. Of course, in typical contribute-to-every-interesting-problem-he-becomes-aware-of fashion, Scott Aaronson managed to debunk even this idea.

Aaronson proved that if you want two rational agents to agree approximately, and with a given degree of certainty, only a short conversation needs to ensue. More specifically, in terms of computational complexity, he showed that the amount of communication it takes for them to agree depends entirely on how approximate their agreement ought to be and how certain they want to be of it. It does not depend at all on how much knowledge either of them has.
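
If I'm remembering the paper ("The Complexity of Agreement") correctly, the flavor of the bound is roughly this: to agree to within $\varepsilon$ with probability at least $1 - \delta$, the agents only need to exchange on the order of

$$
O\!\left(\frac{1}{\delta \varepsilon^{2}}\right)
$$

bits, a quantity that depends only on the desired precision and confidence, and not on how much either agent knows.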

So where does that leave us? To me, it looks like even more glaring evidence that humans suck at being rational, are full of themselves and their beliefs, and are typically dishonest when disagreeing (whether or not they are conscious of this fact).


That is all! If you enjoyed this, please consider subscribing below.

P.S. As always, please reach out to me if you have any feedback, or if you just want to chat!