Activity Feed

  Dirk Meulenbelt commented on idea #4289.

Pangram says this idea is 64% AI-generated. Is it?

#4289·Dennis Hackethal, 4 days ago

Yeah, it's me working together with AI, telling it to expand where I don't get it yet, me correcting all the links, and me curating the end result.

  Dennis Hackethal commented on idea #4280.

The Popper-Miller Theorem

Bayesian epistemology says that knowledge works like this: you have a theory, you see evidence, and the evidence raises your confidence in the theory. That's how you learn. The math behind this is Bayes' theorem, a formula for updating probabilities when new information arrives.

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used this exact math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."
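
To make the definition concrete, here is a minimal sketch in Python; the function name and the toy numbers are my own, purely illustrative, not from the paper:

```python
def support(p_h_given_e, p_h):
    """s(h|e) = p(h|e) - p(h): how much the evidence changed the probability of h."""
    return p_h_given_e - p_h

# Illustrative numbers only: prior p(h) = 0.5, posterior p(h|e) = 0.75.
print(support(0.75, 0.5))  # 0.25 > 0, so a Bayesian would call e "confirming" evidence for h
```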

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).
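
This equivalence is easy to verify by brute force over the four truth assignments; the snippet below is just a sanity check, not part of the original proof:

```python
from itertools import product

# Check that h has the same truth value as (h or e) and (h or not e)
# for every assignment of truth values to h and e.
for h, e in product([True, False], repeat=2):
    assert ((h or e) and (h or not e)) == h

print("h is equivalent to (h ∨ e) ∧ (h ∨ ¬e) under every truth assignment")
```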

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

Given e, the disjunction h ∨ e is certain, so p(h ∨ e | e) = 1 and the support reduces to the expression above. It is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise: the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

To see where this comes from: given e, the disjunction h ∨ ¬e is true exactly when h is, so p(h ∨ ¬e | e) = p(h|e); and p(h ∨ ¬e) = 1 − p(¬h ∧ e) = 1 − p(e)(1 − p(h|e)). Subtracting the second from the first and factoring gives the expression above. Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.
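
The whole decomposition can be checked numerically. The sketch below uses a made-up joint distribution over h and e (the four probabilities are my own assumptions, chosen only to illustrate); it computes the support for h and for both components directly from the definitions, then confirms the additive decomposition, the signs, and the closed-form expression from Step 3:

```python
# Hypothetical joint distribution over the four atomic cases (must sum to 1).
p_h_and_e, p_h_and_not_e, p_not_h_and_e, p_not_h_and_not_e = 0.3, 0.2, 0.1, 0.4

p_h = p_h_and_e + p_h_and_not_e        # p(h)   = 0.5
p_e = p_h_and_e + p_not_h_and_e        # p(e)   = 0.4
p_h_given_e = p_h_and_e / p_e          # p(h|e) = 0.75

def support(posterior, prior):
    """s(x|e) = p(x|e) - p(x)."""
    return posterior - prior

# Total support for h.
s_total = support(p_h_given_e, p_h)

# Deductive component h ∨ e: certain given e; its prior is p(h) + p(e) - p(h ∧ e).
s_deductive = support(1.0, p_h + p_e - p_h_and_e)

# Inductive component h ∨ ¬e: given e it holds exactly when h does, so its
# posterior is p(h|e); its prior is 1 - p(¬h ∧ e).
s_inductive = support(p_h_given_e, 1.0 - p_not_h_and_e)

assert abs(s_total - (s_deductive + s_inductive)) < 1e-12          # decomposition holds
assert s_deductive >= 0                                            # deductive part supported
assert s_inductive <= 0                                            # inductive part counter-supported
assert abs(s_inductive + (1 - p_e) * (1 - p_h_given_e)) < 1e-12    # matches -(1 - p(e))(1 - p(h|e))

print(f"s(h|e) = {s_total:.2f} = {s_deductive:.2f} (deductive) + ({s_inductive:.2f}) (inductive)")
```

Swapping in other numbers (short of certainties) changes the sizes of the two terms but never their signs; that invariance is the content of the theorem.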

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Conjunction Problem

Deutsch also offers a separate, more intuitive argument: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.

  • T₁ = quantum mechanics
  • T₂ = general relativity

Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:

p(T₁ ∧ T₂) = 0

Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.

Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors. Indeed, since p(T₁ ∧ T₂) = 0, p(T₁) + p(T₂) = p(T₁ ∨ T₂) ≤ 1, so at least one of our two best theories must get credence no higher than 1/2, which means its negation is ranked at least as high.

A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)

#4280·Dirk Meulenbelt (OP) revised 5 days ago

Pangram says this idea is 64% AI-generated. Is it?

  Dirk Meulenbelt revised criticism #4286.

Made header into a subheader


Placeholder Criticism: The Decomposition is NOT Arbitrary

Deutsch argues the decomposition is not arbitrary: it follows necessarily from the probability calculus itself. He and Leonardis have been working on a paper to make this clearer, noting that "Popper and Miller's two papers on this are very condensed and mathematical and use special terminology they created," which has made the result difficult for others to evaluate fairly. The difficulty of presentation has been mistaken for a flaw in the argument. (Joseph Walker Podcast, Ep. 139)

Deutsch never actually explains why the decomposition is necessary. Therefore this criticism is a placeholder, to be updated once someone finds out his reasoning.

Placeholder Criticism: The Decomposition is NOT Arbitrary

Deutsch argues the decomposition is not arbitrary: it follows necessarily from the probability calculus itself. He and Leonardis have been working on a paper to make this clearer, noting that "Popper and Miller's two papers on this are very condensed and mathematical and use special terminology they created," which has made the result difficult for others to evaluate fairly. The difficulty of presentation has been mistaken for a flaw in the argument. (Joseph Walker Podcast, Ep. 139)

Deutsch never actually explains why the decomposition is necessary. Therefore this criticism is a placeholder, to be updated once someone finds out his reasoning.

  Dirk Meulenbelt addressed criticism #4285.

Criticism 1: The Decomposition is Arbitrary

The objection: The entire theorem rests on splitting a hypothesis h into (h ∨ e) and (h ∨ ¬e) and then showing the second part gets negative support. But why split it that way?

Critics argue this is a choice, not a necessity. Define "the part that goes beyond the evidence" differently and you get different results.

This is the most common objection in the literature. Ellery Eells argued the key assumption has been "almost uniformly rejected," because the propositions generated by Popper and Miller's decomposition contain content from both the evidence and the hypothesis tangled together, so they don't cleanly capture "the part that goes beyond the evidence." (Eells, 1988, British Journal for the Philosophy of Science 39, 111–116 — PDF)

Chihara and Gillies proposed "a new condition on what constitutes 'the part of a hypothesis that goes beyond the evidence' that is incompatible with Popper and Miller's condition," arguing this refutes the impossibility of inductive support. (Chihara & Gillies, Philosophical Studies 58, 1990 — PDF)

#4285·Dirk Meulenbelt (OP), 4 days ago

Placeholder Criticism: The Decomposition is NOT Arbitrary

Deutsch argues the decomposition is not arbitrary: it follows necessarily from the probability calculus itself. He and Leonardis have been working on a paper to make this clearer, noting that "Popper and Miller's two papers on this are very condensed and mathematical and use special terminology they created," which has made the result difficult for others to evaluate fairly. The difficulty of presentation has been mistaken for a flaw in the argument. (Joseph Walker Podcast, Ep. 139)

Deutsch never actually explains why the decomposition is necessary. Therefore this criticism is a placeholder, to be updated once someone finds out his reasoning.

  Dirk Meulenbelt criticized idea #4280.

The Popper-Miller Theorem

Bayesian epistemology says that knowledge works like this: you have a theory, you see evidence, and the evidence raises your confidence in the theory. That's how you learn. The math behind this is Bayes' theorem, a formula for updating probabilities when new information arrives.

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used this exact math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Conjunction Problem

Deutsch also offers a separate, more intuitive argument: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.

  • T₁ = quantum mechanics
  • T₂ = general relativity

Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:

p(T₁ ∧ T₂) = 0

Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.

Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors.

A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)

#4280·Dirk Meulenbelt (OP) revised 5 days ago

Criticism 1: The Decomposition is Arbitrary

The objection: The entire theorem rests on splitting a hypothesis h into (h ∨ e) and (h ∨ ¬e) and then showing the second part gets negative support. But why split it that way?

Critics argue this is a choice, not a necessity. Define "the part that goes beyond the evidence" differently and you get different results.

This is the most common objection in the literature. Ellery Eells argued the key assumption has been "almost uniformly rejected," because the propositions generated by Popper and Miller's decomposition contain content from both the evidence and the hypothesis tangled together, so they don't cleanly capture "the part that goes beyond the evidence." (Eells, 1988, British Journal for the Philosophy of Science 39, 111–116 — PDF)

Chihara and Gillies proposed "a new condition on what constitutes 'the part of a hypothesis that goes beyond the evidence' that is incompatible with Popper and Miller's condition," arguing this refutes the impossibility of inductive support. (Chihara & Gillies, Philosophical Studies 58, 1990 — PDF)

  Dennis Hackethal revised idea #4282.

Once this idea is implemented, the ‘Show activity’ button on bounties#show can link to the implementation.

Once this idea is implemented, the ‘Show activity’ button on bounties#show can link to the implementation.

  Dennis Hackethal commented on criticism #2962.

The red ‘Criticized’ label shows how many pending criticisms an idea has. For example ‘Criticized (5)’ means the idea has five pending criticisms.

But if there are lots of comments, including non-criticisms and addressed criticisms, it’s hard to identify pending criticisms.

There should be an easy way to filter comments of a given idea down to only pending criticisms.

#2962·Dennis Hackethal (OP) revised 3 months ago

Once this idea is implemented, the ‘Show activity’ button on bounties#show can link to the implementation.

  Dirk Meulenbelt revised idea #4279.

I removed the implicit link, and I turned the Conjunction Problem into a subheader


The Popper-Miller Theorem

Bayesian epistemology says that knowledge works like this: you have a theory, you see evidence, and the evidence raises your confidence in the theory. That's how you learn. The math behind this is Bayes' theorem, a formula for updating probabilities when new information arrives.

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used this exact math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece #2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Conjunction Problem

Deutsch also offers a separate, more intuitive argument: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.

  • T₁ = quantum mechanics
  • T₂ = general relativity

Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:

p(T₁ ∧ T₂) = 0

Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.

Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors.

A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)

The Popper-Miller Theorem

Bayesian epistemology says that knowledge works like this: you have a theory, you see evidence, and the evidence raises your confidence in the theory. That's how you learn. The math behind this is Bayes' theorem, a formula for updating probabilities when new information arrives.

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used this exact math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Conjunction Problem

Deutsch also offers a separate, more intuitive argument: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.

  • T₁ = quantum mechanics
  • T₂ = general relativity

Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:

p(T₁ ∧ T₂) = 0

Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.

Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors.

A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)

  Dirk Meulenbelt started a discussion titled ‘The Popper-Miller Theorem’. The discussion starts with idea #4279.

The Popper-Miller Theorem

Bayesian epistemology says that knowledge works like this: you have a theory, you see evidence, and the evidence raises your confidence in the theory. That's how you learn. The math behind this is Bayes' theorem, a formula for updating probabilities when new information arrives.

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used this exact math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece #2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise, the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Conjunction Problem

Deutsch also offers a separate, more intuitive argument: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.

  • T₁ = quantum mechanics
  • T₂ = general relativity

Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:

p(T₁ ∧ T₂) = 0

Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.

Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors.

A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)

  Dennis Hackethal submitted idea #4278.

Double-messaging is risky. There can be times when it’s okay, but you need to be careful. https://www.verywellmind.com/double-texting-dos-and-don-ts-8784078

  Dennis Hackethal submitted idea #4277.

Another thing you can mirror is effort. How much effort is someone putting into the conversation? If they’re sending typos, leaving out punctuation, and making grammatical mistakes while you put in the effort to make none of those mistakes, there’s an imbalance.

  Dennis Hackethal commented on idea #4275.

Another rule of thumb, I think also from Atomic Attraction: roughly mirror people’s response times. If someone takes days to get back to you, and you answer right away, you come off low value, even desperate.

#4275·Dennis Hackethal (OP), 6 days ago

Scheduling emails and text messages can help. But you risk sending outdated replies if you get another message in the meantime. I wish there were a feature to automatically cancel a scheduled message.

  Dennis Hackethal submitted idea #4275.

Another rule of thumb, I think also from Atomic Attraction: roughly mirror people’s response times. If someone takes days to get back to you, and you answer right away, you come off low value, even desperate.

  Dennis Hackethal submitted idea #4274.

Should comments be sorted by controversial/uncontroversial first, date second?

  Dennis Hackethal revised idea #4270.

social_intell on IG says the way to distinguish between genuine interest and polite dismissal is specificity.

If someone says ‘keep me posted on that’ or ‘we should hang out sometime’, that’s vague; they’re politely ending the conversation. If you do follow up with them, you’re outing yourself as low value and socially incompetent.

If they really want you to follow up, or if they really want to hang out again, they’ll be specific: ‘let me introduce you to my colleague Peter, he can solve your problem, what’s your email?’, or ‘are you free next Wednesday at 7?’

social_intell on IG says the way to distinguish between genuine interest and polite dismissal is specificity.

If someone says ‘keep me posted on that’ or ‘we should hang out sometime’, that’s vague; they’re politely ending the conversation. If you do follow up with them, you’re outing yourself as low value and socially incompetent.

If they really want you to follow up, or if they really want to hang out again, they’ll be specific and create action: ‘let me introduce you to my colleague Peter, he can solve your problem, what’s your email?’, or ‘are you free next Wednesday at 7?’

  Dennis Hackethal submitted idea #4271.

Daniel Vassallo says to give, give, give, give before you ask. In other words, provide much more value than you hope to get from others. Only then can you realistically expect anything back.

  Dennis Hackethal revised idea #4268. The revision addresses idea #4269.

social_intell on IG says the way to distinguish between genuine interest and polite dismissal is specificity.

If someone says ‘keep me posted on that’ or ‘we should hang out sometime’, that’s vague; they’re politely ending the conversation. If you do follow up with them, you’re outing yourself as low value and socially incompetent.

If they really want you to follow up, or if they really want to hang out again, they’ll be specific: ‘let me introduce you to my colleague Peter, he can solve your problem, what’s your email?’, or ‘are you free next Wednesday at 7?’

social_intell on IG says the way to distinguish between genuine interest and polite dismissal is specificity.

If someone says ‘keep me posted on that’ or ‘we should hang out sometime’, that’s vague; they’re politely ending the conversation. If you do follow up with them, you’re outing yourself as low value and socially incompetent.

If they really want you to follow up, or if they really want to hang out again, they’ll be specific: ‘let me introduce you to my colleague Peter, he can solve your problem, what’s your email?’, or ‘are you free next Wednesday at 7?’

  Dennis Hackethal criticized idea #4268.

social_intell on IG says the way to distinguish between genuine interest and polite dismissal is specificity.

If someone says ‘keep me posted on that’ or ‘we should hang out sometime’, that’s vague; they’re politely ending the conversation. If you do follow up with them, you’re outing yourself as low value and socially incompetent.

If they really want you to follow up, or if they really want to hang out again, they’ll be specific: ‘let me introduce you to my colleague Peter, he can solve your problem, what’s your email?’, or ‘are you free next Wednesday at 7?’

#4268·Dennis Hackethal (OP), 6 days ago

  Dennis Hackethal submitted idea #4268.

social_intell on IG says the way to distinguish between genuine interest and polite dismissal is specificity.

If someone says ‘keep me posted on that’ or ‘we should hang out sometime’, that’s vague; they’re politely ending the conversation. If you do follow up with them, you’re outing yourself as low value and socially incompetent.

If they really want you to follow up, or if they really want to hang out again, they’ll be specific: ‘let me introduce you to my colleague Peter, he can solve your problem, what’s your email?’, or ‘are you free next Wednesday at 7?’

  Dennis Hackethal submitted criticism #4267.

Composing a top-level idea on mobile is atrocious. Need to scroll all the way down to see the form, the form keeps hiding itself, etc.

  Dennis Hackethal submitted idea #4266.

When somebody asks what you do for a living, there are two layers to this question, according to IG account social_intell.

One layer is the surface: taking the question literally and answering it literally, like ‘I’m a project manager at company X.’

But social_intell says they’re really gauging your status and whether you extract or provide value. You should explain what problem you can solve for people and what you’re building: e.g. “I help companies build products people actually want. What about you?”

  Dennis Hackethal commented on idea #4264.

Another rule of thumb: in verbal group conversations, like in Twitter spaces, keep an eye on speakers’ average mic time and try not to go above that. (Realistically, that means undershooting the average, because you’re liable to underestimate your own mic time.) Consistently going above will come off as rambling or dominating.

#4264·Dennis Hackethal (OP), 6 days ago

I forget if I came up with this myself or if I read this somewhere.

  Dennis Hackethal submitted idea #4264.

Another rule of thumb: in verbal group conversations, like in Twitter spaces, keep an eye on speakers’ average mic time and try not to go above that. (Realistically, that means undershooting the average, because you’re liable to underestimate your own mic time.) Consistently going above will come off as rambling or dominating.

  Dennis Hackethal started a discussion titled ‘Social Skills’.

I’m kind of socially retarded, but explicit study of social skills has helped. Here are some things I’ve learned.

The discussion starts with idea #4263.

I read Atomic Attraction years ago but I remember liking it. I’ve spoken to the author, Christopher Canwell. As I recall, he argues that the ratio between gray and blue text bubbles should be roughly 1:1. As a rule of thumb.

  Dennis Hackethal commented on idea #2753.

Idea: Veritula Articles

Currently, Veritula is a discussion website. I believe it could one day do what Wikipedia and Grokipedia do, but better.

A step towards that would be enabling users to produce ‘articles’ or something similar.

An ‘Articles’ tab would be distinct from the ‘Discussions’ tab, featuring explanatory documents similar to encyclopedia entries, and perhaps also blogpost-like content.

Articles focus on distilling the good ideas created/discovered in the discussions that occur on Veritula.

#2753·Benjamin Davies revised 3 months ago

Another idea: letting users post ideas to their own profile. Such ideas wouldn’t be part of a discussion.