Activity Feed

  Dennis Hackethal started a bounty for idea #4334 worth USD 150.00.
  Dennis Hackethal submitted idea #4334.

autopair.js is bug-free.

  Dennis Hackethal submitted criticism #4333.

When an empty block is passed to render, it results in an empty tag '<>'.

  Dennis Hackethal submitted idea #4332.

Some Reagent-like way to make things reactive using proc as first element? And then the server keeps track of which procs have been rendered, which items have changed, and re-renders that part of the template in a turbo stream?

  Dennis Hackethal submitted criticism #4330.

Redirects result in two additional requests, the first of which is a turbo-stream request that renders nothing, thus (presumably) prompting the browser to make another request for the same resource.

This? https://stackoverflow.com/a/74071278

  Dennis Hackethal submitted idea #4329.

Is there a way to teach user-built helpers how to process Hiccdown? Or maybe intercepting capture already took care of this?

  Dennis Hackethal submitted idea #4328.

Could the application layout live in ApplicationHelper#layout?

  Dennis Hackethal started a bounty for idea #4327 worth USD 150.00.
  Dennis Hackethal submitted idea #4327.

Hiccdown is bug-free.

  Dennis Hackethal archived idea #333 along with any revisions.
  Dennis Hackethal archived idea #303 along with any revisions.
  Dennis Hackethal archived idea #300 along with any revisions.
  Dirk Meulenbelt added USD 300.00 to the bounty for idea #3069.
  Dennis Hackethal started a bounty for idea #3069 worth USD 1,000.00.
  Dennis Hackethal revised idea #4038.

How Do Bounties Work?

Bounties let you invite criticism and reward high-quality contributions with real money.

Bounties are in beta. Expect things to break.

How do I participate?

First, log in or sign up.

Next, browse the list of bounties. Click a bounty’s dollar amount to view its page, review the bountied idea and the terms, and submit a criticism on that idea.

That’s it – you’re in.

How do I get paid?

Each bounty enters a review period roughly five days after it starts (the exact date is shown on the bounty page). The review period lasts 24 hours. During this time, the bounty owner reviews submissions and rejects only those that don’t meet the stated terms.

To be eligible for a payout, all of the following must be true:

  1. Your submission is a direct criticism of the bountied idea.
  2. Your submission has no pending counter-criticisms when the review period begins.
  3. Your submission meets the bounty terms and the site-wide terms.
  4. You’ve connected a Stripe account in good standing before the review period ends.

The bounty owner is never eligible to receive payouts from their own bounty.

Note that counter-criticisms are not constrained by the bounty-specific terms. Only direct criticisms of the bountied idea are.

How much will I get paid?

The bounty amount is prorated among all eligible submissions.

For example, if there are ten eligible criticisms and you contributed two of them, you receive 20% of the bounty.

Fractions of cents are not paid out.
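The proration rule above can be sketched in a few lines of Ruby (an illustrative sketch, not the site's actual code; the method name and cent-based amounts are assumptions):

```ruby
# Hypothetical sketch of prorating a bounty among eligible submissions.
# Amounts are in cents; integer division drops fractions of a cent,
# which are not paid out.
def prorated_payout_cents(bounty_cents, your_submissions, total_submissions)
  (bounty_cents * your_submissions) / total_submissions
end

# A USD 150.00 bounty with ten eligible criticisms, two of them yours:
prorated_payout_cents(15_000, 2, 10) # => 3000, i.e. USD 30.00 (20%)
```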

How do I run a bounty?

Click the megaphone button next to an idea (near bookmark, archive, etc.).

Set a bounty amount and write clear terms describing the kinds of criticisms you’re willing to pay for. Then enter your credit-card details to authorize the amount plus a 5% bounty fee.

Your card is authorized, not charged, when the bounty starts.

The bounty typically runs for five to seven days, depending on your card’s authorization window. Toward the end, a 24-hour review period begins. During this time, review submissions and reject those that don’t meet your terms. Submissions you don’t reject are automatically accepted at the end of the review period and become eligible for payout. Your card is then charged the full authorization.

If you reject all submissions, your card is never charged.

Can I fund an existing bounty?

Yes. Review the bounty terms. If you agree with them, click the ‘Add funding’ button on the bounty page and follow the next steps. At this point, your card is authorized but not charged.

If the bounty owner accepts any submissions during the review period, your card is charged the full authorization. If he rejects all submissions, your card is never charged.

Funders are never eligible to receive payouts from a bounty they funded.

Start a bounty today. Terms apply.

How Do Bounties Work?

Bounties let you invite criticism and reward high-quality contributions with real money.

Bounties are in beta. Expect things to break.

How do I participate?

First, log in or sign up.

Next, browse the list of bounties. Click a bounty’s dollar amount to view its page, review the bountied idea and the terms, and submit a criticism of that idea.

That’s it – you’re in.

How do I get paid?

The bounty owner reviews submissions for eligibility against his bounty terms.

To be eligible for a payout, all of the following must be true:

  1. Your submission is a direct criticism of the bountied idea.
  2. Your submission has no pending counter-criticisms by the deadline. (For temporary bounties, that’s when the review period ends; for standing bounties, it’s seven days after submission.)
  3. Your submission meets the bounty terms and the site-wide terms.
  4. You’ve connected a Stripe account in good standing before the deadline.
  5. You’ve not contributed funds to the bounty.

The bounty owner is never eligible to receive payouts from their own bounty.

Note that counter-criticisms are not constrained by the bounty-specific terms. Only direct criticisms of the bountied idea are.

How much will I get paid?

For temporary bounties, the amount is prorated among eligible participants based on contribution. For example, if there are ten eligible criticisms and you contributed two of them, you receive 20% of the amount when the bounty ends.

For standing bounties, amounts are assigned on a per-submission basis. For example, funders may indicate that they will pay a total of USD 100 for the first eligible submission, a total of USD 50 for the second eligible submission, and so on. Each eligible submission has its own payout date.

Fractions of cents are not paid out.
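For contrast with proration, a standing bounty's per-submission allocation might be sketched like this (hypothetical code; the amounts mirror the USD 100 / USD 50 example above, and the method name is made up):

```ruby
# Hypothetical standing-bounty allocation: a fixed payout (in cents) for
# each eligible submission in order, nothing once funds are exhausted.
ALLOCATION = [10_000, 5_000] # USD 100 for the first, USD 50 for the second

# submission_number is 1-based (first eligible submission = 1).
def standing_payout_cents(submission_number)
  ALLOCATION.fetch(submission_number - 1, 0)
end

standing_payout_cents(1) # => 10000 (USD 100.00)
standing_payout_cents(3) # => 0 (no funds allocated beyond the second)
```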

How do I run a bounty?

Click the megaphone button next to an idea (near the buttons to bookmark, archive, etc.).

Set a bounty amount and write clear terms describing the kinds of criticisms you’re willing to pay for. Then enter your credit-card details to authorize the amount plus a 5% bounty fee.

Your card is at most authorized, but not charged, when the bounty starts.

A temporary bounty typically runs for five to seven days, depending on your card’s authorization window. You may review submissions during the entire bounty period. Toward the end, a 24-hour grace period begins during which no new submissions can be made but you may continue your review. Reject any submissions that don’t meet your terms. Submissions you don’t reject are automatically accepted at the end of the grace period and become eligible for payout. Your card is then charged the full authorization.

A standing bounty runs for as long as funds last. Each submission has its own seven-day review period. Again, reject any submissions that don’t meet your terms. Submissions you don’t reject are automatically accepted seven days after submission. Your card is then charged as indicated in your funding allocation.

If you reject all submissions, your card is never charged.

What’s the difference between a temporary and a standing bounty?

A temporary bounty has a fixed duration, typically between five and seven days. The bounty amount is prorated among eligible participants at the end. Standing bounties, on the other hand, don’t have a fixed duration; they run as long as funds last. Funds are paid out continuously and on a per-submission basis, as described above.

Temporary bounties are ideal when you have limited time and a smaller budget. Standing bounties are ideal for the long term with a larger budget. However, you can mix and match based on your own unique preferences and circumstances: for example, it’s possible to use a larger budget on a temporary bounty.

Can I fund an existing bounty?

Yes. Review the bounty terms. If you agree with them, click the ‘Add funding’ button on the bounty page and follow the next steps. At this point, your card is at most authorized but not charged.

Your card is charged for any submissions the bounty owner does not reject. If he rejects all submissions, your card is never charged.

Funders are never eligible to receive payouts from a bounty they funded.

Start a bounty today. Terms apply.

  Dennis Hackethal addressed criticism #4322.

Criticism 1: The Decomposition is Arbitrary

The Popper-Miller theorem works by splitting any prediction h into two pieces and then showing the evidence always hurts one of them. The entire argument rises or falls on whether that split is the right one. This is the most common objection in the literature.

Say your prediction is "it will rain tomorrow" and your evidence is "the barometer is falling." They split the prediction into:

  • "Rain OR barometer falling": the part that overlaps with the evidence
  • "Rain OR barometer NOT falling": the part that "goes beyond" the evidence

The evidence trivially supports the first part. But it hurts the second: you now know the barometer IS falling, which kills the "barometer not falling" escape route, so the whole thing narrows to just "rain", a harder path than before. Popper and Miller call this second part the "inductive content," show it always gets negative support, and declare induction impossible.

But this is not the only way to carve up "it will rain." You could split it into

  • "rain AND barometer falling" OR
  • "rain AND barometer NOT falling"

And now the evidence clearly boosts the first piece. Or you could not split it at all and just ask: does a falling barometer raise the probability of rain? Yes. That's inductive support, no decomposition needed. Only Popper and Miller's particular carving guarantees the "beyond" part always gets hurt.

So why this split? Their rule: the part that "goes beyond" the evidence must share no nontrivial logical consequences with it. The "beyond" part and the evidence must have absolutely nothing in common*. The only proposition satisfying this is (h ∨ ¬e), which forces the decomposition and makes the theorem work.

Philosopher Charles Chihara argued this rule is way too strict. Consider:

  • Prediction: "All metals expand when heated"
  • Evidence: "This rod is copper"

Together these yield: "This copper rod will expand when heated." Neither alone tells you that. It clearly goes beyond the evidence. But under Popper and Miller's rule it doesn't count, because it shares a consequence with the evidence (both mention this copper rod). Chihara's alternative: k "goes beyond" e if e does not logically entail k.

Under this looser definition, the negative support result disappears. He published this with Donald Gillies, who had earlier defended the theorem but agreed the decomposition question needed revisiting. (Chihara & Gillies, 1990, PDF)

Ellery Eells made a related point: look at "rain OR barometer NOT falling": it welds your weather prediction to the negation of your barometric reading. That's not a clean extraction of "the part about rain that has nothing to do with barometers." It's a Frankenstein proposition the algebra created. Eells argued this assumption has been "almost uniformly rejected" in the literature. (Eells, 1988, PDF)

#4322 · Dirk Meulenbelt (OP), revised 3 days ago

The Popper-Miller theorem works by splitting any prediction h into two pieces…

I wonder if your revision from hypothesis to prediction was a bit sweeping.

Do they really argue predictions can be split into two pieces? That doesn’t sound right. But I could see hypotheses being split in two.

  Dirk Meulenbelt revised criticism #4295.

Turned the hypotheses into predictions


Criticism 1: The Decomposition is Arbitrary

The Popper-Miller theorem works by splitting any hypothesis h into two pieces and then showing the evidence always hurts one of them. The entire argument rises or falls on whether that split is the right one. This is the most common objection in the literature.

Say your hypothesis is "it will rain tomorrow" and your evidence is "the barometer is falling." They split the hypothesis into:

  • "Rain OR barometer falling": the part that overlaps with the evidence
  • "Rain OR barometer NOT falling": the part that "goes beyond" the evidence

The evidence trivially supports the first part. But it hurts the second: you now know the barometer IS falling, which kills the "barometer not falling" escape route, so the whole thing narrows to just "rain", a harder path than before. Popper and Miller call this second part the "inductive content," show it always gets negative support, and declare induction impossible.

But this is not the only way to carve up "it will rain." You could split it into

  • "rain AND barometer falling" OR
  • "rain AND barometer NOT falling"

And now the evidence clearly boosts the first piece. Or you could not split it at all and just ask: does a falling barometer raise the probability of rain? Yes. That's inductive support, no decomposition needed. Only Popper and Miller's particular carving guarantees the "beyond" part always gets hurt.

So why this split? Their rule: the part that "goes beyond" the evidence must share no nontrivial logical consequences with it. The "beyond" part and the evidence must have absolutely nothing in common*. The only proposition satisfying this is (h ∨ ¬e), which forces the decomposition and makes the theorem work.

Philosopher Charles Chihara argued this rule is way too strict. Consider:

  • Hypothesis: "All metals expand when heated"
  • Evidence: "This rod is copper"

Together these yield: "This copper rod will expand when heated." Neither alone tells you that. It clearly goes beyond the evidence. But under Popper and Miller's rule it doesn't count, because it shares a consequence with the evidence (both mention this copper rod). Chihara's alternative: k "goes beyond" e if e does not logically entail k.

Under this looser definition, the negative support result disappears. He published this with Donald Gillies, who had earlier defended the theorem but agreed the decomposition question needed revisiting. (Chihara & Gillies, 1990, PDF)

Ellery Eells made a related point: look at "rain OR barometer NOT falling": it welds your weather prediction to the negation of your barometric reading. That's not a clean extraction of "the part about rain that has nothing to do with barometers." It's a Frankenstein proposition the algebra created. Eells argued this assumption has been "almost uniformly rejected" in the literature. (Eells, 1988, PDF)

Criticism 1: The Decomposition is Arbitrary

The Popper-Miller theorem works by splitting any prediction h into two pieces and then showing the evidence always hurts one of them. The entire argument rises or falls on whether that split is the right one. This is the most common objection in the literature.

Say your prediction is "it will rain tomorrow" and your evidence is "the barometer is falling." They split the prediction into:

  • "Rain OR barometer falling": the part that overlaps with the evidence
  • "Rain OR barometer NOT falling": the part that "goes beyond" the evidence

The evidence trivially supports the first part. But it hurts the second: you now know the barometer IS falling, which kills the "barometer not falling" escape route, so the whole thing narrows to just "rain", a harder path than before. Popper and Miller call this second part the "inductive content," show it always gets negative support, and declare induction impossible.

But this is not the only way to carve up "it will rain." You could split it into

  • "rain AND barometer falling" OR
  • "rain AND barometer NOT falling"

And now the evidence clearly boosts the first piece. Or you could not split it at all and just ask: does a falling barometer raise the probability of rain? Yes. That's inductive support, no decomposition needed. Only Popper and Miller's particular carving guarantees the "beyond" part always gets hurt.

So why this split? Their rule: the part that "goes beyond" the evidence must share no nontrivial logical consequences with it. The "beyond" part and the evidence must have absolutely nothing in common*. The only proposition satisfying this is (h ∨ ¬e), which forces the decomposition and makes the theorem work.

Philosopher Charles Chihara argued this rule is way too strict. Consider:

  • Prediction: "All metals expand when heated"
  • Evidence: "This rod is copper"

Together these yield: "This copper rod will expand when heated." Neither alone tells you that. It clearly goes beyond the evidence. But under Popper and Miller's rule it doesn't count, because it shares a consequence with the evidence (both mention this copper rod). Chihara's alternative: k "goes beyond" e if e does not logically entail k.

Under this looser definition, the negative support result disappears. He published this with Donald Gillies, who had earlier defended the theorem but agreed the decomposition question needed revisiting. (Chihara & Gillies, 1990, PDF)

Ellery Eells made a related point: look at "rain OR barometer NOT falling": it welds your weather prediction to the negation of your barometric reading. That's not a clean extraction of "the part about rain that has nothing to do with barometers." It's a Frankenstein proposition the algebra created. Eells argued this assumption has been "almost uniformly rejected" in the literature. (Eells, 1988, PDF)

  Dirk Meulenbelt revised criticism #4306. The revision addresses ideas #4289 and #4310.

Switched the prediction for an explanation. Looks even gayer now.


The Popper-Miller Theorem

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used Bayesian math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
  2. The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.

They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise: the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.
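The decomposition is easy to check numerically. The sketch below (illustrative probabilities, not values from the paper) picks an assignment for p(h), p(e), and p(h ∧ e) and verifies that the deductive and inductive supports sum to the total support:

```ruby
# Pick any consistent probabilities for h, e, and their conjunction.
p_h, p_e, p_h_and_e = 0.3, 0.4, 0.2

p_h_given_e = p_h_and_e / p_e                # p(h|e) = 0.5
p_h_or_e    = p_h + p_e - p_h_and_e          # p(h ∨ e) = 0.5

s_total     = p_h_given_e - p_h              # s(h|e) = 0.2
s_deductive = 1 - p_h_or_e                   # s(h ∨ e | e) = 0.5
s_inductive = -(1 - p_e) * (1 - p_h_given_e) # s(h ∨ ¬e | e) = -0.3

# The deductive part is supported, the inductive part counter-supported,
# and the two terms account for the total support exactly.
(s_total - (s_deductive + s_inductive)).abs < 1e-12 # => true
```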

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

The Popper-Miller Theorem

In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used Bayesian math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)

They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)

Here's what that means concretely. Say your theory is "swans are white because the genes controlling feather pigmentation in the swan lineage produce only white melanin." This is an explanation: it tells you why swans are white, not just that they are. It also predicts that the next swan you see will be white.

You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:

  1. The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed the theory's prediction for this case.
  2. The inductive piece: "and the reason it's white is a genetic mechanism that applies to all swans, including the ones I haven't looked at." This is the actual explanation — the part that would represent learning something new about the world.

They proved mathematically that piece 2 — the explanation, the part that matters — always receives zero or negative support from the evidence. The only work the evidence ever does is confirm the prediction it directly touched. It never reaches the explanation behind it.

The Math

What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.

Step 1: Define "support."

The support that evidence e gives to hypothesis h is defined as the change in probability:

s(h|e) = p(h|e) − p(h)

If this number is positive, the evidence raised the probability of the theory. Bayesians call this "confirmation."

Step 2: Decompose the hypothesis.

Popper and Miller split h into two components:

  • The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.

  • The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence — the part that would still need to be true even if the evidence hadn't occurred.

The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).

Step 3: Calculate the support for each component.

Using standard probability rules, the support for the deductive component is:

s(h ∨ e | e) = 1 − p(h ∨ e)

This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise: the evidence is logically contained in it.

The support for the inductive component is:

s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))

Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.

Step 4: The result.

The total support decomposes as:

s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)

The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the theory that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component — the explanation, the mechanism, the part that would represent genuine learning about the unobserved — is always counter-supported.

Implication

The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.

David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: 'The part of the theory that's not implied logically by the evidence – why does our credence for that go up?' Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")

  Benjamin Davies revised criticism #4317.

You have moved the goalposts from "isn’t it just down to other people subjectively valuing the asset you are buying" to "isn’t it just down to other people subjectively valuing the product/service the business produces" (paraphrasing).

Of course it is all subjective in a sense. My point is that you can disagree with the entire market about what an asset is worth and do just fine.

You have moved the goalposts from "isn’t it just down to other people subjectively valuing the asset you are buying" to "isn’t it just down to other people subjectively valuing the product/service the business produces" (paraphrasing).

Of course it is all subjective in a sense. My point is that you can disagree with the entire market about what an asset is worth and still have it turn out to be a good investment.

  Benjamin Davies addressed criticism #4237.

Sure, but those cash flows are still downstream of what consumers subjectively value though?

So the estimation of "intrinsic value" is ultimately a guess of what people will subjectively value in the future?

#4237 · Erik Orrje, 12 days ago

You have moved the goalposts from "isn’t it just down to other people subjectively valuing the asset you are buying" to "isn’t it just down to other people subjectively valuing the product/service the business produces" (paraphrasing).

Of course it is all subjective in a sense. My point is that you can disagree with the entire market about what an asset is worth and do just fine.

  Benjamin Davies commented on idea #4262.

Another idea: letting users post ideas to their own profile. Such ideas wouldn’t be part of a discussion.

#4262 · Dennis Hackethal (OP), 10 days ago

Cool idea!

  Dennis Hackethal commented on idea #4313.

I think you're correct. It's still a testable hypothesis. How would you suggest I rename it?

#4313 · Dirk Meulenbelt (OP), 5 days ago

How would you suggest I rename it?

Instead of “Say your theory is "all swans are white."”, write ‘Say your prediction is "all swans are white."’

I don’t know if that replacement works for “But Popper and Miller split the theory into two pieces…” and similar parts, because those may or may not need to be about a theory rather than a prediction.

  Dennis Hackethal criticized idea #4313.

I think you're correct. It's still a testable hypothesis. How would you suggest I rename it?

#4313 · Dirk Meulenbelt (OP), 5 days ago

It's still a testable hypothesis.

No, it’s a prediction. “A hypothesis … is a proposed explanation for a phenomenon.” https://en.wikipedia.org/wiki/Hypothesis (links and formatting removed)

Again, ‘all swans are white’ is not an explanation.

  Dirk Meulenbelt commented on idea #4312.

Yeah explanations answer ‘how’ or ‘why’ questions. Popper wrote:

In seeking pure knowledge our aim is, quite simply, to understand, to answer how-questions and why-questions. These are questions which are answered by giving an explanation. Thus all problems of pure knowledge are problems of explanation.

Karl Popper, Objective Knowledge, chapter 7

‘All swans are white’ is like saying 2 + 2 = 4. It predicts a result given a theory of addition. It does not state the theory.


#4312 · Dennis Hackethal, 5 days ago

I think you're correct. It's still a testable hypothesis. How would you suggest I rename it?