Search Ideas
How does this happen? (Not a metaphorical question.)
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
But an AGI might not develop such phrases independently. (See #3730.)
One part of my question was whether a formal criterion can be applied universally. If the criterion itself must be chosen (for instance, whatever brings more fun, meaning, or practical utility), then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
The other part was how you actually criticize an implicit or unconscious idea. If an unconscious idea gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
Aah, then I agree. I thought you meant AGI would develop the same metaphors independently.
Deutsch leaves open whether ‘difficulty to vary’ is a relative scale or an absolute one.
Do I need at least two explanations to know whether one is harder to vary than the other? Or can I tell, with only a single explanation, how hard it is to vary on its own?
Choosing between explanations “according to how good they are” is vague. If I have three explanations, A, B, and C, and A is better than B, which is better than C, does that mean I adopt only A and reject both B and C? I assume so, but I don’t think Deutsch ever says so anywhere.
The quoted statement is also compatible with adopting A with strong conviction, B with medium conviction (as a backup or something), and only slightly adopting C (if it’s still good, just not as good as the others) or rejecting C slightly (if it’s a little bad) or rejecting it very strongly (if it’s really bad).
Deutsch’s stance in my own words:
The distinguishing characteristic between rationality and irrationality is that rationality is the search for good explanations. All progress comes from the search for good explanations. So the distinction between good vs bad explanations is epistemologically fundamental.
A good explanation is hard to vary “while still accounting for what it purports to account for.” (BoI chapter 1 glossary.) A bad explanation is easy to vary.
For example, the Persephone myth as an explanation of the seasons is easy to change without impacting its ability to explain the seasons. You could arbitrarily replace Persephone and other characters and the explanation would still ‘work’. The axis-tilt explanation of the seasons, on the other hand, is hard to change without breaking it. You can’t just replace the axis tilt with something else, say.
The quality of a theory is a matter of degree. The harder it is to change a theory, the better that theory is. When deciding which explanation to adopt, we should “choose between [explanations] according to how good they are…: how hard to vary.” (BoI chapter 9; see similar remark in chapter 8.)
From my article:
[T]he assignment of positive values enables self-coercion: if I have a ‘good’ explanation worth 500 points, and a criticism worth only 100 points, Deutsch’s epistemology (presumably) says to adopt the explanation even though it has a pending criticism. After all, we’re still 400 in the black! But according to the epistemology of Taking Children Seriously, a parenting philosophy Deutsch cofounded before writing The Beginning of Infinity, acting on an idea that has pending criticisms is the definition of self-coercion. Such an act is irrational and incompatible with his view that rationality is fun in the sense that rationality means unanimous consent between explicit, inexplicit, unconscious, and any other type of idea in one’s mind.
In short, does the search for good explanations enable self-coercion and contradict TCS?
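To make the conflict concrete, here is a minimal sketch. The point values and the scoring scheme itself are my illustration, not anything Deutsch or TCS specify:

```python
# Hypothetical sketch: two decision rules for acting on an idea.
# The point values (500, 100) are illustrative only.

explanation_score = 500      # how 'good' the explanation is said to be
criticism_scores = [100]     # pending criticisms, each with a weight

# Net-score rule (the reading of Deutsch criticized above):
# act on the idea as long as it stays 'in the black'.
net = explanation_score - sum(criticism_scores)
act_by_net_score = net > 0            # True: 500 - 100 = 400, so act despite the criticism

# TCS-style rule: acting on an idea with any pending criticism is self-coercion.
act_by_tcs = len(criticism_scores) == 0   # False: one criticism is still pending

print(act_by_net_score, act_by_tcs)   # True False -> the two rules disagree
```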
Our explanations do get better the more criticisms we address, but Deutsch has it backwards: the increasing quality of an explanation is the result of critical activity, not its means.
From my article:
Isn’t the assignment of positive scores, of positive reasons to prefer one theory over another, a kind of justificationism? Deutsch criticizes justificationism throughout The Beginning of Infinity, but isn’t an endorsement of a theory as ‘good’ a kind of justification?
From my article:
[I]sn’t the difficulty of changing an explanation at least partly a property not of the explanation itself but of whoever is trying to change it? If I’m having difficulty changing it, maybe that’s because I lack imagination. Or maybe I’m just new to that field and an expert could easily change it. In which case the difficulty of changing an explanation is, again, not an objective property of that explanation but a subjective property of its critics. How could subjective properties be epistemologically fundamental?
From my article:
[D]epending on context, being hard to change can be a bad thing. For example, ‘tight coupling’ is a reason software can be hard to change, and it’s considered bad because it reduces maintainability.
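A toy illustration of tight coupling (my own example, not from the article): the report code below is hard to change, and that is a defect, not a virtue, because it reaches into the database class’s internals.

```python
# Toy example of tight coupling (illustrative only).
# Swapping the database or changing the row layout forces edits to ReportGenerator,
# because it constructs the database itself and depends on its internal structure.

class MySQLDatabase:
    def __init__(self):
        self.rows = [("alice", 3), ("bob", 5)]

class ReportGenerator:
    def render(self):
        db = MySQLDatabase()          # hard-coded dependency: no way to substitute another source
        lines = [f"{name}: {count}" for name, count in db.rows]  # depends on the row layout
        return "\n".join(lines)

print(ReportGenerator().render())
```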
Deutsch says rationality means seeking good explanations, so without a step-by-step guide on how to seek good explanations, we cannot know when we are being irrational. That’s bad for error correction.
Popper formalized much of his epistemology, such as the notions of empirical content and degrees of falsifiability. Why hold Deutsch to a different standard? Why couldn’t he formalize the steps for finding the quality of a given explanation?
No, it’s asking for a formalization of rational decision-making, which is a related but separate issue. Given a set of explanations (after they’ve already been created), what non-creative sorting algorithm do we use to find the best one?
Isn’t this asking for a formalization of creativity, which is impossible?
No, see #3706. I’m open to user input (within reason). That covers any creative parts. The non-creative parts can be automated by definition.
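A minimal sketch of what I mean, assuming the hard-to-vary scores are supplied by the user as creative input (the names and values are hypothetical); only the sorting is automated:

```python
# Sketch: the non-creative part of choosing between already-created explanations.
# The hard-to-vary scores are user input (the creative part); the program only sorts.

explanations = {
    "axis tilt":       {"hard_to_vary": 0.9},
    "Persephone myth": {"hard_to_vary": 0.1},
}

def best(explanations):
    # Pick the explanation with the highest user-assigned hard-to-vary score.
    return max(explanations, key=lambda name: explanations[name]["hard_to_vary"])

print(best(explanations))  # "axis tilt"
```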
Isn’t this basically asking for a specification of the creative program? Isn’t this effectively an AGI project?
Deutsch says to choose between explanations “according to how good they are” – note the plural.
What if I can only come up with one explanation? Can I just go with that one? What if it’s bad but still the best I could do? He leaves such questions open.
Deutsch contradicts his yardstick for understanding a computational task. He says that you haven’t understood a computational task if you can’t program it. His method of decision-making based on finding good explanations is a computational task. He can’t program it, so he hasn’t understood it.
Even if we allow creative user input, e.g. a score for the quality of an explanation, we run into all kinds of open questions, such as what upper and lower limits to use for the score, and unexpected behavior, such as criticisms pushing an explanation’s score beyond those limits.
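For instance (the range and the per-criticism penalty below are hypothetical numbers, not part of Deutsch’s epistemology), if scores are supposed to stay within some range, nothing in the scheme says what it means when criticisms push the total outside it:

```python
# Hypothetical illustration of the limit problem.

LOWER, UPPER = 0, 1000

score = 200                 # user-assigned quality of the explanation
criticisms = [150, 150]     # two criticisms, each subtracting 150 points

for penalty in criticisms:
    score -= penalty

print(score)                    # -100: below the supposed lower limit
print(LOWER <= score <= UPPER)  # False: the scheme gives no rule for what this means
```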