Search Ideas
When an empty block is passed to render, it results in an empty tag '<>'
Some Reagent-like way to make things reactive using proc as first element? And then the server keeps track of which procs have been rendered, which items have changed, and re-renders that part of the template in a turbo stream?
Use frame layout for turbo frame requests? https://discuss.rubyonrails.org/t/the-right-way-to-override-render-method/84765/2
Redirects result in two additional requests, the first of which is a turbo-stream request that renders nothing, thus (presumably) prompting the browser to make another request for the same resource.
Is there a way to teach user-built helpers how to process Hiccdown? Or maybe intercepting capture already took care of this?
How Do Bounties Work?
Bounties let you invite criticism and reward high-quality contributions with real money.
Bounties are in beta. Expect things to break.
How do I participate?
Browse the list of bounties. Click a bounty’s dollar amount to view its page, review the bountied idea and the terms, and submit a criticism of that idea.
That’s it – you’re in.
How do I get paid?
The bounty owner reviews submissions for eligibility against his bounty terms.
To be eligible for a payout, all of the following must be true:
- Your submission is a direct criticism of the bountied idea.
- Your submission has no pending counter-criticisms by the deadline. (For temporary bounties, that’s when the review period ends; for standing bounties, it’s seven days after submission.)
- Your submission meets the bounty terms and the site-wide terms.
- You’ve connected a Stripe account in good standing before the deadline.
- You’ve not contributed funds to the bounty.
The bounty owner is never eligible to receive payouts from their own bounty.
Note that counter-criticisms are not constrained by the bounty-specific terms. Only direct criticisms of the bountied idea are.
How much will I get paid?
For temporary bounties, the amount is prorated among eligible participants based on contribution. For example, if there are ten eligible criticisms and you contributed two of them, you receive 20% of the amount when the bounty ends.
For standing bounties, amounts are assigned on a per-submission basis. For example, funders may indicate that they will pay a total of USD 100 for the first eligible submission, a total of USD 50 for the second eligible submission, and so on. Each eligible submission has its own payout date.
Fractions of cents are not paid out.
How do I run a bounty?
Click the megaphone button next to an idea (near the buttons to bookmark, archive, etc.).
Set a bounty amount and write clear terms describing the kinds of criticisms you’re willing to pay for. Then enter your credit-card details to authorize the amount plus a 5% bounty fee.
Your card is at most authorized, but not charged, when the bounty starts.
A temporary bounty typically runs for five to seven days, depending on your card’s authorization window. You may review submissions during the entire bounty period. Toward the end, a 24-hour grace period begins during which no new submissions can be made but you may continue your review. Reject any submissions that don’t meet your terms. Submissions you don’t reject are automatically accepted at the end of the review period and become eligible for payout. Your card is then charged the full authorized amount.
A standing bounty runs for as long as funds last. Each submission has its own seven-day review period. Again, reject any submissions that don’t meet your terms. Submissions you don’t reject are automatically accepted seven days after submission. Your card is then charged as indicated in your funding allocation.
If you reject all submissions, your card is never charged.
What’s the difference between a temporary and a standing bounty?
A temporary bounty has a fixed duration, typically between five and seven days. The bounty amount is prorated among eligible participants at the end. Standing bounties, on the other hand, don’t have a fixed duration; they run as long as funds last. Funds are paid out continuously and on a per-submission basis, as described above.
Temporary bounties are ideal when you have limited time and a smaller budget. Standing bounties are ideal for the long term with a larger budget. However, you can mix and match based on your preferences and circumstances: for example, it’s possible to use a larger budget on a temporary bounty.
Can I fund an existing bounty?
Yes. Review the bounty terms. If you agree with them, click the ‘Add funding’ button on the bounty page and follow the next steps. At this point, your card is at most authorized but not charged.
Your card is charged for any submissions the bounty owner does not reject. If he rejects all submissions, your card is never charged.
Funders are never eligible to receive payouts from a bounty they funded.
Start a bounty today. Terms apply.
The Popper-Miller theorem works by splitting any prediction h into two pieces…
I wonder if your revision from hypothesis to prediction was a bit sweeping.
Do they really argue predictions can be split into two pieces? That doesn’t sound right. But I could see hypotheses being split in two.
Criticism 1: The Decomposition is Arbitrary
The Popper-Miller theorem works by splitting any prediction h into two pieces and then showing the evidence always hurts one of them. The entire argument rises or falls on whether that split is the right one. This is the most common objection in the literature.
Say your prediction is "it will rain tomorrow" and your evidence is "the barometer is falling." They split the prediction into:
- "Rain OR barometer falling": the part that overlaps with the evidence
- "Rain OR barometer NOT falling": the part that "goes beyond" the evidence
The evidence trivially supports the first part. But it hurts the second: you now know the barometer IS falling, which kills the "barometer not falling" escape route, so the whole thing narrows to just "rain", a harder path than before. Popper and Miller call this second part the "inductive content," show it always gets negative support, and declare induction impossible.
But this is not the only way to carve up "it will rain." You could split it into
- "rain AND barometer falling" OR
- "rain AND barometer NOT falling"
And now the evidence clearly boosts the first piece. Or you could not split it at all and just ask: does a falling barometer raise the probability of rain? Yes. That's inductive support, no decomposition needed. Only Popper and Miller's particular carving guarantees the "beyond" part always gets hurt.
So why this split? Their rule: the part that "goes beyond" the evidence must share no nontrivial logical consequences with it. The "beyond" part and the evidence must have absolutely nothing in common. The only proposition satisfying this is (h ∨ ¬e), which forces the decomposition and makes the theorem work.
Philosopher Charles Chihara argued this rule is way too strict. Consider:
- Prediction: "All metals expand when heated"
- Evidence: "This rod is copper"
Together these yield: "This copper rod will expand when heated." Neither alone tells you that. It clearly goes beyond the evidence. But under Popper and Miller's rule it doesn't count, because it shares a consequence with the evidence (both mention this copper rod). Chihara's alternative: k "goes beyond" e if e does not logically entail k.
Under this looser definition, the negative support result disappears. He published this with Donald Gillies, who had earlier defended the theorem but agreed the decomposition question needed revisiting. (Chihara & Gillies, 1990, PDF)
Ellery Eells made a related point: look at "rain OR barometer NOT falling": it welds your weather prediction to the negation of your barometric reading. That's not a clean extraction of "the part about rain that has nothing to do with barometers." It's a Frankenstein proposition the algebra created. Eells argued this assumption has been "almost uniformly rejected" in the literature. (Eells, 1988, PDF)
The Popper-Miller Theorem
In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used Bayesian math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)
They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)
Here's what that means concretely. Say your theory is "swans are white because the genes controlling feather pigmentation in the swan lineage produce only white melanin." This is an explanation: it tells you why swans are white, not just that they are. It also predicts that the next swan you see will be white.
You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:
- The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed the theory's prediction for this case.
- The inductive piece: "and the reason it's white is a genetic mechanism that applies to all swans, including the ones I haven't looked at." This is the actual explanation — the part that would represent learning something new about the world.
They proved mathematically that piece 2 — the explanation, the part that matters — always receives zero or negative support from the evidence. The only work the evidence ever does is confirm the prediction it directly touched. It never reaches the explanation behind it.
The Math
What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.
Step 1: Define "support."
The support that evidence e gives to hypothesis h is defined as the change in probability:
s(h|e) = p(h|e) − p(h)
If this number is positive, the evidence raised the probability of the theory. Bayesians call this "confirmation."
Step 2: Decompose the hypothesis.
Popper and Miller split h into two components:
The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.
The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence — the part that would still need to be true even if the evidence hadn't occurred.
The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).
Step 3: Calculate the support for each component.
Using standard probability rules, the support for the deductive component is:
s(h ∨ e | e) = 1 − p(h ∨ e)
This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise: the evidence is logically contained in it.
The support for the inductive component is:
s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))
Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.
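For the curious, this expression follows in a few lines from the Step 1 definition and the sum rule. Given e, (h ∨ ¬e) is true exactly when h is, so p(h ∨ ¬e | e) = p(h|e); and the only way (h ∨ ¬e) fails is when e holds and h does not, so p(h ∨ ¬e) = 1 − p(e)(1 − p(h|e)). Substituting:

```latex
\begin{aligned}
s(h \lor \lnot e \mid e) &= p(h \lor \lnot e \mid e) - p(h \lor \lnot e) \\
  &= p(h \mid e) - \bigl[\,1 - p(e)\bigl(1 - p(h \mid e)\bigr)\bigr] \\
  &= -(1 - p(e))\bigl(1 - p(h \mid e)\bigr)
\end{aligned}
```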
Step 4: The result.
The total support decomposes as:
s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)
The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the theory that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component — the explanation, the mechanism, the part that would represent genuine learning about the unobserved — is always counter-supported.
Implication
The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.
David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: 'The part of the theory that's not implied logically by the evidence – why does our credence for that go up?' Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")
You have moved the goalposts from "isn’t it just down to other people subjectively valuing the asset you are buying" to "isn’t it just down to other people subjectively valuing the product/service the business produces” (paraphrasing).
Of course it is all subjective in a sense. My point is that you can disagree with the entire market about what an asset is worth and still have it turn out to be a good investment.
How would you suggest I rename it?
Instead of “Say your theory is "all swans are white."”, write ‘Say your prediction is "all swans are white."’
I don’t know if that replacement works for “But Popper and Miller split the theory into two pieces…” and similar parts, because those may or may not need to be about a theory rather than a prediction.
It's still a testable hypothesis.
No, it’s a prediction. “A hypothesis … is a proposed explanation for a phenomenon.” https://en.wikipedia.org/wiki/Hypothesis (links and formatting removed)
Again, ‘all swans are white’ is not an explanation.
I think you’re correct. It's still a testable hypothesis. How would you suggest I rename it?
Yeah explanations answer ‘how’ or ‘why’ questions. Popper wrote:
In seeking pure knowledge our aim is, quite simply, to understand, to answer how-questions and why-questions. These are questions which are answered by giving an explanation. Thus all problems of pure knowledge are problems of explanation.
‘All swans are white’ is like saying 2 + 2 = 4. It predicts a result given a theory of addition. It does not state the theory.
Say your theory is "all swans are white."
That doesn’t sound like a theory. It sounds like a prediction/statement.
The Conjunction Problem
Deutsch also offers an argument against Bayesian epistemology: take quantum mechanics and general relativity, our two best physics theories. They contradict each other.
- T₁ = quantum mechanics
- T₂ = general relativity
Both are spectacularly successful. A Bayesian should assign high credence to each. But T₁ and T₂ contradict each other, and probability theory is absolute about contradictions:
p(T₁ ∧ T₂) = 0
Zero. The combined understanding that lets us build GPS satellites, which need both relativity for orbital corrections and quantum mechanics for atomic clocks, is worth literally nothing under the probability calculus.
Meanwhile, the negation ¬T₁ ("quantum mechanics is false") tells you nothing about the world. It's the infinite set of every possible alternative, mostly nonsensical. Yet the probability calculus ranks it higher than the theory that lets us build lasers and transistors.
A framework that assigns zero value to our best knowledge is, Deutsch argues, not capturing what knowledge actually is. Instead: "What science really seeks to ‘maximise’ (or rather, create) is explanatory power." (Deutsch, "Simple refutation of the 'Bayesian' philosophy of science," 2014)
The Popper-Miller Theorem
In 1983, Karl Popper and David Miller published a paper in Nature titled "A proof of the impossibility of inductive probability" that used Bayesian math to prove something uncomfortable: the part of a theory that goes beyond the evidence never gets supported by that evidence. It actually gets negative support. In their words: "probabilistic support in the sense of the calculus of probability can never be inductive support." (Popper & Miller, 1983)
They expanded on this in a second paper: "although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence." (Popper & Miller, 1987)
Here's what that means concretely. Say your theory is "all swans are white." You see a white swan. Your overall confidence in the theory goes up. But Popper and Miller split the theory into two pieces:
- The deductive piece: "this particular swan I'm looking at is white." The evidence directly confirmed that.
- The inductive piece: "and all the other swans I haven't looked at are also white." This is the part that would actually represent learning something new about the world.
They proved mathematically that piece 2, the inductive piece, the part that matters, always receives zero or negative support from the evidence. The only work the evidence ever does is confirm what it directly touched. It never reaches beyond itself.
The Math
What follows is a simplified sketch of the proof. For the full formal treatment, see the original paper.
Step 1: Define "support."
The support that evidence e gives to hypothesis h is defined as the change in probability:
s(h|e) = p(h|e) − p(h)
If this number is positive, the evidence raised the probability of the hypothesis. Bayesians call this "confirmation."
Step 2: Decompose the hypothesis.
Popper and Miller split h into two components:
The deductive component: (h ∨ e), meaning "h or e." This is the part of h that is logically connected to the evidence. If e is true, then (h ∨ e) is automatically true, so evidence trivially supports it.
The inductive component: (h ∨ ¬e), meaning "h or not-e." This is the part of h that goes beyond the evidence, the part that would still need to be true even if the evidence hadn't occurred.
The hypothesis h is logically equivalent to the conjunction of these two components: h ⟺ (h ∨ e) ∧ (h ∨ ¬e).
Step 3: Calculate the support for each component.
Using standard probability rules, the support for the deductive component is:
s(h ∨ e | e) = 1 − p(h ∨ e)
This is always ≥ 0, since p(h ∨ e) ≤ 1. The evidence always supports the deductive part. No surprise: the evidence is logically contained in it.
The support for the inductive component is:
s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))
Both (1 − p(e)) and (1 − p(h|e)) are ≥ 0 (assuming we're not dealing with certainties), so their product is ≥ 0, and the negative sign means the whole expression is always ≤ 0.
Step 4: The result.
The total support decomposes as:
s(h|e) = s(h ∨ e | e) + s(h ∨ ¬e | e)
The first term (deductive) is always non-negative. The second term (inductive) is always non-positive. The evidence never positively supports the part of the hypothesis that goes beyond the evidence. Whatever "boost" h gets from e is entirely accounted for by the deductive connection between them. The inductive component, the part that would represent genuine learning about the unobserved, is always counter-supported.
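The decomposition can also be checked numerically. The sketch below uses a toy joint distribution with made-up probabilities (h = "rain", e = "barometer falling"); any consistent distribution would show the same pattern:

```python
# Numerical check of the Popper-Miller decomposition on a toy joint
# distribution. All numbers are made up for illustration.
p_h_and_e     = 0.30   # rain, barometer falling
p_h_and_not_e = 0.10   # rain, barometer steady
p_not_h_and_e = 0.10   # no rain, barometer falling
# (remaining 0.50: no rain, barometer steady)

p_h = p_h_and_e + p_h_and_not_e        # 0.40
p_e = p_h_and_e + p_not_h_and_e        # 0.40
p_h_given_e = p_h_and_e / p_e          # 0.75

def support(p_x_given_e, p_x):
    """s(x|e) = p(x|e) - p(x), the Step 1 definition."""
    return p_x_given_e - p_x

# Total support: the evidence raises p(h) from 0.40 to 0.75
s_total = support(p_h_given_e, p_h)

# Deductive component (h ∨ e): certain once e is known
p_h_or_e = p_h + p_e - p_h_and_e
s_deductive = support(1.0, p_h_or_e)   # equals 1 − p(h ∨ e)

# Inductive component (h ∨ ¬e): given e, it reduces to h
p_h_or_not_e = 1.0 - p_not_h_and_e
s_inductive = support(p_h_given_e, p_h_or_not_e)

assert s_deductive >= 0
assert s_inductive <= 0
# Closed form from Step 3: s(h ∨ ¬e | e) = −(1 − p(e))(1 − p(h|e))
assert abs(s_inductive - (-(1 - p_e) * (1 - p_h_given_e))) < 1e-9
# Step 4: the two components sum to the total support
assert abs(s_total - (s_deductive + s_inductive)) < 1e-9

print(f"total {s_total:+.2f} = deductive {s_deductive:+.2f} "
      f"+ inductive {s_inductive:+.2f}")
# → total +0.35 = deductive +0.50 + inductive -0.15
```

Even though the evidence raises the probability of "rain" overall (+0.35), the inductive component comes out negative (−0.15), exactly as the theorem requires.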
Implication
The implication is devastating for Bayesian epistemology: the entire framework of "updating beliefs with evidence" is an illusion. The number goes up, but the going-up is entirely accounted for by deduction. There is no induction hiding inside Bayes' theorem. The Bayesians' own math proves it.
David Deutsch, who has been working with colleague Matjaž Leonardis on a more accessible presentation of the theorem (Deutsch on X/Twitter, 2020), puts it this way: "There's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: 'The part of the theory that's not implied logically by the evidence – why does our credence for that go up?' Well, unfortunately it goes down." (Joseph Walker Podcast, Ep. 139, "Against Bayesianism")
Bayesian epistemology never said contradictory theories are useful together. It says they can't both be true simultaneously, and they can't. That's why physicists are looking for a unified theory. p(T₁ ∧ T₂) = 0 is the correct answer. It would be a bug if it were anything else.
QM + GR together represent more knowledge about reality than either one alone, yet that is not reflected in Bayesian epistemology. Bayesian epistemology misses the point of science: improving our explanations.