Hard to Vary or Hardly Usable?

Dennis Hackethal:
This idea has an active bounty worth USD 1,300.00.

My critique of David Deutsch’s The Beginning of Infinity as a programmer. In short, his ‘hard to vary’ criterion at the core of his epistemology is fatally underspecified and impossible to apply.

Deutsch says that one should adopt explanations based on how hard they are to change without impacting their ability to explain what they claim to explain. The hardest-to-change explanation is the best and should be adopted. But he doesn’t say how to figure out which is hardest to change.

A decision-making method is a computational task, and Deutsch says you haven’t understood a computational task until you can program it. Yet he can’t program the steps for determining how ‘hard to vary’ an explanation is, if only because those steps are underspecified. There are too many open questions.

So by his own yardstick, he hasn’t understood his epistemology.

You will find that and many more criticisms here: https://blog.dennishackethal.com/posts/hard-to-vary-or-hardly-usable

Knut Sondre Sæbø:

Have some thoughts, which might be way off, but I’m interested in your response. It seems to me that "hard to vary" is itself the criterion that a theory should be as programmable as possible. As you note, the goal of a theory should be to make it as explicit as possible, and a program is explicitness in its most complete form. Any theory with ambiguous components automatically has a breaking point: a place where it can be changed without the change being detectable. A programmable theory has strict causal relations all the way from the axioms to the prediction, which makes any change to its components detectable. In other words: a theory is hard to vary to the extent that its components, and the couplings between them, can be specified as a program. If a theory is vague, you cannot tell when it has been varied.

This might give a concrete operationalization. A breaking point is any place in the formalization where the chain stops being programmable: a primitive with no implementable type, a coupling between components that cannot be turned into a function, or just a step that requires implicit theories to fill the explanatory gaps. A mathematical theory with no remaining gaps has zero breaking points and is maximally hard to vary. A theory in natural language is already worse, because words carry ambiguity and vary from mind to mind. This does not rule out better and worse theories in natural language, since we can use more or less ambiguous words and relations. But it does create a hierarchy of hard-to-vary explanations, where the share of the explanation that is programmable, or at least unambiguous, forms the basis for measuring the "hard-to-vary" criterion.
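This proposed measure can be sketched in a few lines of Python. Everything here is illustrative: the `Component` type and the scoring function are my own naming, not anything from the discussion; the score is simply the share of components flagged as programmable.

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One component or coupling in a theory's explanatory chain."""
    name: str
    programmable: bool  # False marks a breaking point: no implementable type or function

def hard_to_vary_score(components: list[Component]) -> float:
    """Return the share of the chain that is programmable, from 0.0 to 1.0.

    By the proposal above, a higher share means a harder-to-vary theory;
    a score of 1.0 means zero breaking points.
    """
    if not components:
        return 0.0
    return sum(c.programmable for c in components) / len(components)
```

This only restates the proposal; the hard part, which the surrounding criticism targets, is deciding how to carve a theory into components and how to judge each one programmable.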

This is probably too crude a formalization. But evaluating the two theories of Demeter's emotions and axial tilt as explanations, you could check how much of each is programmable. Detecting seasons is programmable in both cases through temperature and changes in weather. Demeter's emotions and the causal link from them to the weather, which is the entire explanation, are not programmable. In the axial tilt theory, every component is. So on this measure Demeter scores 25% and axial tilt scores 100%.
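Under that reading, the 25% and 100% figures can be reproduced with a toy breakdown. The four-way split of each theory into components below is my own reading of the comment’s arithmetic, not a canonical decomposition:

```python
# Hypothetical component breakdowns; which parts count as "programmable"
# follows the comment: season detection is programmable in both theories.
demeter = {
    "detect seasons from temperature and weather": True,
    "Demeter's emotions": False,
    "emotions cause weather changes": False,
    "sadness recurs on an annual cycle": False,
}
axial_tilt = {
    "detect seasons from temperature and weather": True,
    "Earth's axis is tilted relative to its orbital plane": True,
    "tilt plus orbit determines sunlight per hemisphere": True,
    "sunlight determines temperature and weather": True,
}

def programmable_share(components: dict[str, bool]) -> float:
    """Fraction of components that can be specified as a program."""
    return sum(components.values()) / len(components)

print(programmable_share(demeter))     # 0.25
print(programmable_share(axial_tilt))  # 1.0
```

The numbers depend entirely on the chosen decomposition, which is exactly the kind of underspecification the original criticism points at.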

Dennis Hackethal:

We can redefine ‘hard to vary’, but we’d still need a working implementation in the form of computer code.

… Demeter scores 25% and axial tilt scores 100%.

Now do this universally, for any given theory.
