Dirk Meulenbelt

@dirk-meulenbelt​·​Joined Aug 2024​·​Ideas
 User
Registered their account.
 Initiator
Started their first discussion.
 Novice
Posted their first idea.
 Critic
 Copy editor
Created their first revision.
 Defender
 Beginner
Posted their 10th idea.
 Engager
Participates in three or more discussions.
 Intermediate
Posted their 50th idea.
 Assistant editor
Created their 10th revision.
 Private
 Shield

  Dirk Meulenbelt commented on criticism #4393.

But this sounds like you’re saying justificationism is necessarily the same as foundationalism. Whereas in #4392 you agreed it’s only a kind of justificationism.

#4393​·​Dennis HackethalOP, 1 day ago

Why does this sound like I am equating them?

  Dirk Meulenbelt commented on criticism #4383.

Dirk writes:

Foundationalism, or justificationism, is the idea that beliefs can be fully justified, proven true by some final authority beyond question.

I’m not sure foundationalism and justificationism are quite the same thing.

From BoI ch. 1 glossary:

[Justificationism is t]he misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.

Whereas foundationalism describes a prerequisite for knowledge to grow (properly). As in, needing a secure foundation or else the whole edifice falls apart.

I could see foundationalism being a flavor of justificationism, but not the same thing.

#4383​·​Dennis HackethalOP revised 1 day ago

I’m not sure foundationalism and justificationism are quite the same thing.

You are right. Foundationalism is a kind of justificationism. The secure foundation is a kind of justification.

I will have to rewrite this in my article.

  Dirk Meulenbelt commented on criticism #4386.

Just because Dirk’s notion of justificationism breaks with BoI’s doesn’t mean Dirk is wrong. BoI could be wrong.

#4386​·​Dennis HackethalOP, 1 day ago

Indeed. Justification without finality is fake.

"X is true because of Y, but we can discuss Y"

is functionally the same as

"X is true and we can discuss why"

  Dirk Meulenbelt posted idea #4372.

Explanatory knowledge consists of statements. Statements are at least in part explicit. Therefore inexplicit explanatory knowledge is not possible.

Entirely explicit explanatory knowledge is not possible either, as all knowledge refers to other knowledge implicitly.

  Dirk Meulenbelt addressed criticism #4359.

Not all cases of wanting more of something are cases of addiction.

I want to buy a second chair because I enjoy the first one, not because I cannot help but buy another.

Getting customers addicted means making it so they cannot exercise their free will (or have serious trouble doing so). They’re effectively unable to criticize ‘buy another’ as a course of action.

#4359​·​Dennis Hackethal, 4 days ago

Saying that getting customers addicted makes it "so they cannot exercise their free will" denies human creativity, and opens the door to all sorts of draconian laws where people are "protected from themselves".

  Dirk Meulenbelt criticized idea #4068.

Those who advocate making most/all drugs illegal tend to think alcohol should remain legal, despite alcohol having many of the same problems as drugs.

#4068​·​Benjamin DaviesOP, 29 days ago

Making alcohol illegal has been tried and was disastrous. Drugs are already illegal, which is arguably also disastrous. Those who advocate MAKING most drugs illegal but not alcohol are, I think, people who want to outlaw weed.

  Dirk Meulenbelt posted idea #4343.

Drugs are currently illegal, and though drug-related deaths in the US have gone down recently, they were at an all-time high. Making drugs illegal does not seem to deter drug use enough to offset drug users' loss of legal recourse, proper testing, and other such benefits of (legal) society.

  Dirk Meulenbelt addressed criticism #4137.

Drugs are a net negative for society.

#4137​·​Benjamin DaviesOP, 28 days ago

Drugs are too broad a category. Is widespread cocaine use the same as occasional magic mushrooms? The latter is suggested to have neuroprotective benefits.

  Dirk Meulenbelt addressed criticism #4131.

Getting someone hooked on an addictive substance to get repeat business is predatory. It’s not an honest way to do business. Even if consuming drugs was legal, maybe the selling of drugs should still be illegal.

#4131​·​Dennis Hackethal, 28 days ago

Subjectively applies to every good product that makes its purchasers want to buy more of it. Like good food, video games, comfortable chairs.

  Dirk Meulenbelt commented on idea #4067.

Not prohibited by law.

#4067​·​Benjamin DaviesOP, 29 days ago

To produce, purchase, sell, or to use?

  Dirk Meulenbelt addressed criticism #4060.

If they violate rights, they should be punished by the law; that applies regardless of whether they take drugs.

#4060​·​Benjamin DaviesOP, 29 days ago

If the drug + violation becomes a pattern, it's rational to outlaw it. (Assuming the outlawing works.)

E.g. alcohol is prohibited for drivers, even for drivers who are great drunk drivers.

  Dirk Meulenbelt addressed criticism #4337.

Communities could exclude drug users.

#4337​·​Dirk Meulenbelt, 5 days ago

In today's society they only have this ability to a limited degree, and would still have to deal with the drug users in public.

  Dirk Meulenbelt addressed criticism #4336.

Violating the rights of other people depends on whatever their rights are. If we replace it with "desires", or use a libertarian way of saying "aggress on", then it's really just up to the people. I'd rather not live around drug users (depending on the drug), even if none of them physically assault me. I.e. "violation" is subjective, and ultimately decided by the polity that creates the laws.

#4336​·​Dirk Meulenbelt, 5 days ago

Communities could exclude drug users.

  Dirk Meulenbelt criticized idea #4058.

All drugs should be legal because people have a right to do what they want, as long as it isn’t violating the rights of others.

#4058​·​Benjamin DaviesOP, 29 days ago

Violating the rights of other people depends on whatever their rights are. If we replace it with "desires", or use a libertarian way of saying "aggress on", then it's really just up to the people. I'd rather not live around drug users (depending on the drug), even if none of them physically assault me. I.e. "violation" is subjective, and ultimately decided by the polity that creates the laws.

  Dirk Meulenbelt started a discussion titled ‘Objective Knowledge’.

Does it exist?

The discussion starts with idea #4335.

Knowledge can exist outside any mind. A book contains knowledge whether or not anyone reads it.

  Dirk Meulenbelt added USD 300.00 to the bounty for idea #3069.
  Dirk Meulenbelt revised criticism #4295.

Turned the hypotheses into predictions


Criticism 1: The Decomposition is Arbitrary

The Popper-Miller theorem works by splitting any hypothesis h into two pieces and then showing the evidence always hurts one of them. The entire argument rises or falls on whether that split is the right one. This is the most common objection in the literature.

Say your hypothesis is "it will rain tomorrow" and your evidence is "the barometer is falling." They split the hypothesis into:

  • "Rain OR barometer falling": the part that overlaps with the evidence
  • "Rain OR barometer NOT falling": the part that "goes beyond" the evidence

The evidence trivially supports the first part. But it hurts the second: you now know the barometer IS falling, which kills the "barometer not falling" escape route, so the whole thing narrows to just "rain", a harder path than before. Popper and Miller call this second part the "inductive content," show it always gets negative support, and declare induction impossible.

But this is not the only way to carve up "it will rain." You could split it into

  • "rain AND barometer falling" OR
  • "rain AND barometer NOT falling"

And now the evidence clearly boosts the first piece. Or you could not split it at all and just ask: does a falling barometer raise the probability of rain? Yes. That's inductive support, no decomposition needed. Only Popper and Miller's particular carving guarantees the "beyond" part always gets hurt.

So why this split? Their rule: the part that "goes beyond" the evidence must share no nontrivial logical consequences with it. The "beyond" part and the evidence must have absolutely nothing in common*. The only proposition satisfying this is (h ∨ ¬e), which forces the decomposition and makes the theorem work.

Philosopher Charles Chihara argued this rule is way too strict. Consider:

  • Hypothesis: "All metals expand when heated"
  • Evidence: "This rod is copper"

Together these yield: "This copper rod will expand when heated." Neither alone tells you that. It clearly goes beyond the evidence. But under Popper and Miller's rule it doesn't count, because it shares a consequence with the evidence (both mention this copper rod). Chihara's alternative: k "goes beyond" e if e does not logically entail k.

Under this looser definition, the negative support result disappears. He published this with Donald Gillies, who had earlier defended the theorem but agreed the decomposition question needed revisiting. (Chihara & Gillies, 1990, PDF)

Ellery Eells made a related point: look at "rain OR barometer NOT falling": it welds your weather prediction to the negation of your barometric reading. That's not a clean extraction of "the part about rain that has nothing to do with barometers." It's a Frankenstein proposition the algebra created. Eells argued this assumption has been "almost uniformly rejected" in the literature. (Eells, 1988, PDF)

Criticism 1: The Decomposition is Arbitrary

The Popper-Miller theorem works by splitting any prediction h into two pieces and then showing the evidence always hurts one of them. The entire argument rises or falls on whether that split is the right one. This is the most common objection in the literature.

Say your prediction is "it will rain tomorrow" and your evidence is "the barometer is falling." They split the prediction into:

  • "Rain OR barometer falling": the part that overlaps with the evidence
  • "Rain OR barometer NOT falling": the part that "goes beyond" the evidence

The evidence trivially supports the first part. But it hurts the second: you now know the barometer IS falling, which kills the "barometer not falling" escape route, so the whole thing narrows to just "rain", a harder path than before. Popper and Miller call this second part the "inductive content," show it always gets negative support, and declare induction impossible.

But this is not the only way to carve up "it will rain." You could split it into

  • "rain AND barometer falling" OR
  • "rain AND barometer NOT falling"

And now the evidence clearly boosts the first piece. Or you could not split it at all and just ask: does a falling barometer raise the probability of rain? Yes. That's inductive support, no decomposition needed. Only Popper and Miller's particular carving guarantees the "beyond" part always gets hurt.

So why this split? Their rule: the part that "goes beyond" the evidence must share no nontrivial logical consequences with it. The "beyond" part and the evidence must have absolutely nothing in common*. The only proposition satisfying this is (h ∨ ¬e), which forces the decomposition and makes the theorem work.

Philosopher Charles Chihara argued this rule is way too strict. Consider:

  • Prediction: "All metals expand when heated"
  • Evidence: "This rod is copper"

Together these yield: "This copper rod will expand when heated." Neither alone tells you that. It clearly goes beyond the evidence. But under Popper and Miller's rule it doesn't count, because it shares a consequence with the evidence (both mention this copper rod). Chihara's alternative: k "goes beyond" e if e does not logically entail k.

Under this looser definition, the negative support result disappears. He published this with Donald Gillies, who had earlier defended the theorem but agreed the decomposition question needed revisiting. (Chihara & Gillies, 1990, PDF)

Ellery Eells made a related point: look at "rain OR barometer NOT falling": it welds your weather prediction to the negation of your barometric reading. That's not a clean extraction of "the part about rain that has nothing to do with barometers." It's a Frankenstein proposition the algebra created. Eells argued this assumption has been "almost uniformly rejected" in the literature. (Eells, 1988, PDF)
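The competing carvings above can be checked with a few lines of arithmetic. This is a toy sketch; the joint probabilities for rain and the barometer are made-up numbers, chosen only so that the evidence is positively relevant to the hypothesis:

```python
# Toy joint distribution (made-up numbers, purely for illustration):
# h = "it will rain tomorrow", e = "the barometer is falling"
p_rain = 0.30
p_bar = 0.40
p_rain_and_bar = 0.25      # rain and a falling barometer tend to co-occur

p_rain_given_bar = p_rain_and_bar / p_bar     # 0.625

# 1. No decomposition at all: the evidence raises p(rain). Positive support.
s_plain = p_rain_given_bar - p_rain
assert s_plain > 0

# 2. Popper and Miller's carving: the "beyond" part (h ∨ ¬e).
#    Given e it collapses to h, and p(h ∨ ¬e) = 1 − p(¬h ∧ e).
s_pm_beyond = p_rain_given_bar - (1 - (p_bar - p_rain_and_bar))
assert s_pm_beyond < 0    # always negative under their split

# 3. The alternative carving from the text: (rain AND barometer falling).
#    p(h ∧ e | e) = p(h | e), so the evidence clearly boosts this piece.
s_alt = p_rain_given_bar - p_rain_and_bar
assert s_alt > 0
```

Only the second carving yields negative support; the first and third both come out positive, which is the sense in which the negative result depends on the choice of decomposition.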

  Dirk Meulenbelt revised criticism #4306. The revision addresses ideas #4289 and #4310.

Switched the prediction for an explanation. Looks even gayer now.


The Popper-Miller Theorem

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used Bayesian math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Popper-Miller Theorem

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used Bayesian math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "swans are white because the genes controlling feather pigmentation in the swan lineage produce only white melanin." This is an explanation: it tells you why swans are white, not just that they are. It also predicts that the next swan you see will be white.

You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed the theory's prediction for this case.
  2. The inductive piece: "and the reason it's white is a genetic mechanism that applies to all swans, including the ones I haven't looked at." This is the actual explanation — the part that would represent learning something new about the world.

They proved mathematically that piece 2 — the explanation, the part that matters — always receives zero or negative support from the evidence. The only work the evidence ever does is confirm the prediction it directly touched. It never reaches the explanation behind it.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the theory. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence — the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.
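The formula above is stated without derivation; it follows in a few lines using the same notation and standard probability rules. Given e, the disjunct ¬e is ruled out, so:

p(h ∨ ¬e | e) = p(h|e)

Unconditionally, h ∨ ¬e is the negation of (¬h ∧ e), so:

p(h ∨ ¬e) = 1 − p(¬h ∧ e) = 1 − p(e) + p(h ∧ e) = 1 − p(e) + p(h|e)·p(e)

Subtracting the second from the first:

s(h ∨ ¬e | e) = p(h|e) − 1 + p(e) − p(h|e)·p(e) = p(h|e)(1 − p(e)) − (1 − p(e)) = −(1 − p(e))(1 − p(h|e))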

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the theory that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component — the explanation, the mechanism, the part that would represent genuine learning about the unobserved — is always counter-supported.
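The decomposition identity and the two sign claims can be verified numerically. A minimal sketch, assuming nothing beyond the definitions in Steps 1–3; the joint distribution over h and e is randomized:

```python
import random

def supports(p_he, p_hne, p_nhe, p_nhne):
    """Given a joint distribution over the four atoms
    (h∧e, h∧¬e, ¬h∧e, ¬h∧¬e), return s(h|e), s(h∨e|e), s(h∨¬e|e)."""
    p_h = p_he + p_hne
    p_e = p_he + p_nhe
    p_h_given_e = p_he / p_e
    s_total = p_h_given_e - p_h              # s(h|e) = p(h|e) − p(h)
    s_deductive = 1 - (p_h + p_e - p_he)     # s(h∨e|e) = 1 − p(h∨e)
    # Given e, h∨¬e collapses to h; unconditionally p(h∨¬e) = 1 − p(¬h∧e).
    s_inductive = p_h_given_e - (1 - p_nhe)
    return s_total, s_deductive, s_inductive

random.seed(0)
for _ in range(1000):
    # Random joint distribution, avoiding the degenerate certainties.
    w = [random.random() + 1e-6 for _ in range(4)]
    total = sum(w)
    s_tot, s_ded, s_ind = supports(*(x / total for x in w))
    assert abs(s_tot - (s_ded + s_ind)) < 1e-12  # the decomposition identity
    assert s_ded >= 0                            # deductive part: never hurt
    assert s_ind <= 0                            # inductive part: never helped
```

Every random distribution satisfies all three claims, as the theorem requires: the total support splits exactly into a non-negative deductive term and a non-positive inductive term.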

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: 'The part of the theory that's not implied logically by the evidence – why does our credence for that go up?' Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

  Dirk Meulenbelt commented on idea #4312.

Yeah explanations answer ‘how’ or ‘why’ questions. Popper wrote:

In seeking pure knowledge our aim is, quite simply, to understand, to answer how-questions and why-questions. These are questions which are answered by giving an explanation. Thus all problems of pure knowledge are problems of explanation.

Karl Popper, Objective Knowledge, chapter 7

‘All swans are white’ is like saying 2 + 2 = 4. It predicts a result given a theory of addition. It does not state the theory.

More about explanations

#4312​·​Dennis Hackethal, 12 days ago

I think you're correct. It's still a testable hypothesis. How would you suggest I rename it?

  Dirk Meulenbelt commented on criticism #4310.

Say your theory is "all swans are white."

That doesn’t sound like a theory. It sounds like a prediction/statement.

#4310​·​Dennis Hackethal, 15 days ago

Why not? Does an explanation need a "because"?

  Dirk Meulenbelt updated discussion ‘Arguments against Bayesian Epistemology’.

The title changed from ‘Arguments against Bayesian Epistemology’ to ‘Arguments Against Bayesian Epistemology’.

  Dirk Meulenbelt revised criticism #4300.

The Conjunction Problem

Deutsch also offers a separate, more intuitive argument: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.

  • T₁ = quantum mechanics
  • T₂ = general relativity

Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:

p(T₁ ∧ T₂) = 0

Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.

Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors.

A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)

The Conjunction Problem

Deutsch also offers an argument against Bayesian epistemology: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.

  • T₁ = quantum mechanics
  • T₂ = general relativity

Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:

p(T₁ ∧ T₂) = 0

Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.

Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors.

A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)
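The arithmetic behind this argument is simple but worth making explicit. A toy sketch; the credence numbers are made up for illustration:

```python
# Made-up credences for illustration. Since T1 and T2 contradict each other,
# they are mutually exclusive events, so p(T1) + p(T2) <= 1: at least one of
# our two best theories is forced down to a credence of at most 0.5.
p_T1 = 0.45   # quantum mechanics
p_T2 = 0.45   # general relativity
assert p_T1 + p_T2 <= 1

# The conjunction of contradictory propositions has probability zero,
# no matter how successful each theory is on its own.
p_T1_and_T2 = 0.0

# And the contentless negation outranks the theory itself whenever
# p(T1) <= 0.5, which the exclusivity constraint above forces on at
# least one of the two theories.
p_not_T1 = 1 - p_T1
assert p_not_T1 > p_T1    # 0.55 > 0.45
```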

  Dirk Meulenbelt revised criticism #4301.

The Popper-Miller Theorem

Bayesian epistemology says that knowledge works like this: you have a theory, you see evidence, and the evidence raises your confidence in the theory. That's how you learn. The math behind this is Bayes' theorem, a formula for updating probabilities when new information arrives.

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used this exact math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Popper-Miller Theorem

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used Bayesian math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

  Dirk Meulenbelt updated discussion ‘Arguments Against Bayesian Epistemology’.

The ‘About’ section changed as follows:

Bayesian epistemology says that knowledge works like this: you have a theory, you see evidence, and the evidence raises your confidence in the theory. That's how you learn. The math behind this is Bayes' theorem, a formula for updating probabilities when new information arrives.

  Dirk Meulenbelt revised criticism #4302.

Unnecessary quotation mark


Bayesian epistemology never said contradictory theories are useful together. It says they can't both be true simultaneously, and they can't. That's why physicists are looking for a unified theory. p(T₁ ∧ T₂) = 0 is the correct answer. It would be a bug if it were anything else."

Bayesian epistemology never said contradictory theories are useful together. It says they can't both be true simultaneously, and they can't. That's why physicists are looking for a unified theory. p(T₁ ∧ T₂) = 0 is the correct answer. It would be a bug if it were anything else.