John Horgan, a science writer who teaches at Stevens Institute of Technology in the United States, introduces Bayes' theorem in an article in Scientific American.
Bayes' theorem is named after its inventor, the 18th-century church minister Thomas Bayes, and is a method for calculating the validity of beliefs based on the best available evidence. It can be summed up as "initial belief plus new evidence = new and improved belief." A more detailed version is:
The probability that a belief is true given new evidence equals the probability that the belief is true regardless of that evidence, times the probability that the evidence is true given that the belief is true, divided by the probability that the evidence is true regardless of whether the belief is true.
The form of the basic mathematical formula is:
P(B|E) = P(B) × P(E|B) / P(E), where P stands for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) is the probability of B given that E is true, and P(E|B) is the probability of E given that B is true.
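The formula is simple enough to evaluate directly. The sketch below applies it with illustrative numbers of my own choosing (they do not come from Horgan's article):

```python
def posterior(p_b, p_e_given_b, p_e):
    """Bayes' theorem: P(B|E) = P(B) * P(E|B) / P(E)."""
    return p_b * p_e_given_b / p_e

# Hypothetical values: prior belief P(B) = 0.1, evidence quite likely
# if the belief is true P(E|B) = 0.8, evidence seen overall P(E) = 0.17.
p = posterior(0.1, 0.8, 0.17)
# The evidence lifts the belief from a prior of 0.1 to roughly 0.47.
```

Note that the posterior scales the prior by the ratio P(E|B)/P(E): evidence only strengthens a belief to the extent that it is more probable under that belief than in general.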
Horgan describes the effective application of Bayes' theorem, but warns that it "is just a codification of common sense." He further cautions that it is open to abuse and can facilitate confirmation bias, stating that:
Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe.
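That moral falls directly out of the arithmetic: the denominator P(E) covers every way the evidence could arise, including under alternatives to the belief. A small sketch (with hypothetical numbers, not Horgan's) shows what happens when the alternatives are neglected:

```python
def posterior(prior, p_e_given_b, p_e_given_not_b):
    """Bayes' theorem with P(E) expanded by the law of total probability:
    P(E) = P(E|B)P(B) + P(E|not-B)P(not-B)."""
    p_e = p_e_given_b * prior + p_e_given_not_b * (1 - prior)
    return prior * p_e_given_b / p_e

# If we assume the evidence is nearly impossible under any alternative
# (P(E|not-B) = 0.01), it looks strongly confirming:
strong = posterior(0.5, 0.8, 0.01)   # close to 0.99

# But if an alternative explanation fits the evidence just as well
# (P(E|not-B) = 0.8), the same evidence confirms nothing:
weak = posterior(0.5, 0.8, 0.8)      # stays at 0.5
```

Underrating P(E|not-B), i.e. failing to seek alternative explanations, is exactly how the formula turns into a confirmation-bias machine.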
In a subsequent article, Horgan discusses the two-day meeting titled "Is the Brain Bayesian?" that he attended at New York University. Some scientists propose that our brains employ Bayesian algorithms, and the meeting raised a range of questions about this hypothesis.
Horgan summarises the perspectives of the first two speakers at the meeting, who in turn provided overviews of the pros and cons of the Bayesian brain hypothesis.
The first speaker, Joshua Tenenbaum of MIT, argued that the Bayesian models he and his colleagues have been building are superior for modelling cognition, with their research published in the journal Science.
The second speaker, Jeffrey Bowers, a psychologist at the University of Bristol, reprised a 2012 paper he had co-written, titled “Bayesian just-so stories in psychology and neuroscience”. He argued that nearly any cognitive task can be replicated by Bayesian models, and that they are immune to falsification because they are so flexible.
Horgan lends his support to Bowers’ argument, saying that:
…the Bayesian-brain thesis can be boiled down to a dubious syllogism: Our brains excel at certain tasks. Bayesian programs excel at similar tasks. Therefore our brains employ Bayesian programs.
There are obvious limits to this logic. Peregrine falcons excel at flying, and so do F-15 jets. No one claims that peregrine falcons must therefore employ jet propulsion, because any fool can see that the mechanics of peregrine and jet propulsion are utterly unlike. If the analogy between our brains and Bayesian machines isn't self-evidently foolish, that's only because the mechanics of our cognition remain largely hidden from us.