Bayes for Humans: Updating with Odds and Likelihood Ratios

Published on January 4, 2026

What Bayes is really for

Bayes' theorem is a rule for updating probabilities when new evidence arrives. The point is not perfect math. The point is consistent thinking: start from a baseline, then move your probability in proportion to evidence strength.

In forecasting, Bayes helps you avoid two common errors:

Overconfidence: jumping to extreme probabilities on weak evidence.

Underconfidence: refusing to move even when evidence is strong.

The Bayes workflow in one line

Start with a prior probability. Convert to odds. Multiply by a likelihood ratio. Convert back to a posterior probability.

This odds form is often the clearest version for humans:

posterior odds = prior odds times likelihood ratio
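For a binary YES/NO question, that one line fits in a few lines of Python. This is a minimal sketch; the helper names are ours, not a library API:

```python
def p_to_odds(p):
    """Probability -> odds, e.g. 0.25 -> 1/3."""
    return p / (1 - p)

def odds_to_p(odds):
    """Odds -> probability, e.g. 1.0 -> 0.5."""
    return odds / (1 + odds)

def bayes_update(prior_p, likelihood_ratio):
    """Posterior probability via the odds form of Bayes' theorem."""
    return odds_to_p(p_to_odds(prior_p) * likelihood_ratio)
```

For example, `bayes_update(0.25, 3.0)` gives 0.5: prior odds of 1/3 times an LR of 3 is posterior odds of 1.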

Step 1: Set a defensible prior

Your prior should be anchored in a base rate whenever possible. If you skip the base rate, you are forecasting vibes.

A good question to ask is: "In a large reference class of similar events, how often does this happen?" That frequency is your starting prior.

Step 2: Convert probability to odds

Odds are a ratio, not a percent. For probability p:

• odds = p divided by (1 minus p)

Example:

• p = 0.40

• odds = 0.40 divided by 0.60 = 0.6667

If you prefer additive updates, convert odds to log odds; the transform from probability to log odds is the logit. See Odds, Log Odds, and Logit.
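As a sketch of that additive view, using only Python's standard math module (function names are ours):

```python
import math

def logit(p):
    """Probability -> log odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Log odds -> probability (the logistic function)."""
    return 1 / (1 + math.exp(-x))

# Multiplying odds by an LR becomes adding log(LR) in log-odds space:
posterior_p = inv_logit(logit(0.40) + math.log(1.5))  # ≈ 0.50
```

This additive form is convenient when you stack several pieces of evidence: their log likelihood ratios simply sum.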

Step 3: Estimate evidence strength with likelihood ratios

A likelihood is the probability of seeing the evidence if a given hypothesis is true. A likelihood ratio compares two hypotheses: the probability of the evidence given YES divided by the probability of the evidence given NO.

Interpretation:

• LR = 1 means the evidence does not change anything.

• LR > 1 pushes probability up.

• LR < 1 pushes probability down.

In diagnostic-style settings, likelihood ratios relate to sensitivity and specificity, and the common base rate trap is a false positive rate problem. You do not need medical-grade detail here. You need the concept: evidence has strength, not just direction.
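A quick directional check of the three LR cases above, using the odds form (a minimal sketch with illustrative numbers):

```python
def update(p, lr):
    """Odds-form Bayes update: probability in, probability out."""
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

same = update(0.30, 1.0)  # LR = 1: unchanged, still 0.30
up   = update(0.30, 3.0)  # LR > 1: pushed up, ≈ 0.5625
down = update(0.30, 1/3)  # LR < 1: pushed down, ≈ 0.125
```

Note the asymmetry in probability space: an LR of 3 and an LR of 1/3 are equally strong evidence, but they move the probability by different amounts from 0.30.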

Step 4: Multiply odds, then convert back

Continuing the earlier example:

• prior p = 0.40, prior odds = 0.6667

• choose LR = 1.5

• posterior odds = 0.6667 times 1.5 = 1.0000

• posterior p = odds divided by (1 plus odds) = 1.0000 divided by 2.0000 = 0.50

Your posterior probability is now 0.50.
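The same arithmetic, spelled out step by step:

```python
prior_p = 0.40
prior_odds = prior_p / (1 - prior_p)                 # ≈ 0.6667
lr = 1.5
posterior_odds = prior_odds * lr                     # ≈ 1.0
posterior_p = posterior_odds / (1 + posterior_odds)  # ≈ 0.50
```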

How this prevents overreaction

Bayes forces you to face the baseline. If your base rate is low, even strong-sounding evidence may not justify a huge jump. This is the heart of base rate discipline.

It also prevents narrative errors. Instead of "this feels convincing", you ask "how much does this evidence multiply my odds?" If you cannot justify a multiplier, your update should be small.

From Bayes to predicted probability

In practice, you may update multiple times as new information arrives. Each update produces a new posterior, which becomes the next prior.

This is how you build a stable predicted probability without confusing confidence with probability.
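Chaining updates is just a loop, and because odds multiply, the order of evidence does not matter. A minimal sketch with hypothetical LRs:

```python
def update(p, lr):
    """Odds-form Bayes update: probability in, probability out."""
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

# Three pieces of evidence arrive one at a time (illustrative LRs):
p = 0.30
for lr in (2.0, 0.8, 1.5):
    p = update(p, lr)  # each posterior becomes the next prior

# One update with the combined LR (2.0 * 0.8 * 1.5 = 2.4) gives the same answer.
combined = update(0.30, 2.0 * 0.8 * 1.5)
```

One caveat worth stating: this multiplication is only valid when the pieces of evidence are independent given the hypothesis. Double-counting correlated evidence inflates your posterior.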

Translation to prediction markets

Bayes gives you a probability. Trading requires a cost-aware comparison:

• Convert market price to implied probability and confirm price scale.

• Define your fair price and compute edge.

• Require that edge clears the break-even probability after fees and execution frictions like bid-ask spread and slippage.
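The comparison above can be sketched in a few lines. The price, fee level, and cents-per-contract scale here are hypothetical assumptions, not any particular market's convention:

```python
def implied_prob(price_cents):
    """Binary contract priced in cents (0 to 100) -> implied probability."""
    return price_cents / 100.0

fair_p = 0.50  # posterior probability from your Bayes update
price = 42     # hypothetical market price in cents for YES
cost = 0.02    # assumed fees + spread + slippage, in probability terms

edge = fair_p - implied_prob(price)  # ≈ 0.08
worth_trading = edge > cost
```

Confirming the price scale first matters: treating a 0-1 price as cents (or vice versa) is exactly the 100x error the related guide warns about.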

Bayes improves the quality of your probability. Execution determines whether the probability turns into profit.

Common mistakes

Skipping the prior: you update from nothing and end up anchoring to your own narrative.

Treating evidence as binary: evidence has a spectrum of strength. Use LR language to stay honest.

Jumping to extremes: extreme probabilities create large downside under proper scoring rules and often reflect overconfidence.

Trading without costs: even a better posterior does not guarantee positive EV after fees and spread.

Takeaway

Bayes is a discipline tool. Start with a base rate, update with evidence strength, and avoid narrative jumps. If you do this, your predicted probabilities become more stable, more testable, and more likely to translate into fair price and break-even decisions in prediction markets.

Related

Predicted Probability: How to Build a Forecast You Can Trust

Odds, Log Odds, and Logit: One Concept, Three Views

Market Price to Implied Probability: Avoiding 100x Errors
