Search Ideas
2199 ideas match your query.
Deutsch leaves open whether ‘difficulty to vary’ is a relative scale or an absolute one.
Do I need at least two explanations to know whether one is harder to vary than the other? Or can I tell, with only a single explanation, how hard it is to vary on its own?
Choosing between explanations “according to how good they are” is vague. If I have three explanations, A, B, and C, and A is better than B, which is better than C, does that mean I adopt only A and reject both B and C? I assume so, but I don’t think Deutsch ever says.
The quoted statement is also compatible with adopting A with strong conviction, B with medium conviction (as a backup or something), and only slightly adopting C (if it’s still good, just not as good as the others) or rejecting C slightly (if it’s a little bad) or rejecting it very strongly (if it’s really bad).
From my article:
[T]he assignment of positive values enables self-coercion: if I have a ‘good’ explanation worth 500 points, and a criticism worth only 100 points, Deutsch’s epistemology (presumably) says to adopt the explanation even though it has a pending criticism. After all, we’re still 400 in the black! But according to the epistemology of Taking Children Seriously, a parenting philosophy Deutsch cofounded before writing The Beginning of Infinity, acting on an idea that has pending criticisms is the definition of self-coercion. Such an act is irrational and incompatible with his view that rationality is fun in the sense that rationality means unanimous consent between explicit, inexplicit, unconscious, and any other type of idea in one’s mind.
In short, does the search for good explanations enable self-coercion and contradict TCS?
Our explanations do get better the more criticisms we address, but Deutsch has it backwards: the increasing quality of an explanation is the result of critical activity, not its means.
From my article:
Isn’t the assignment of positive scores, of positive reasons to prefer one theory over another, a kind of justificationism? Deutsch criticizes justificationism throughout The Beginning of Infinity, but isn’t an endorsement of a theory as ‘good’ a kind of justification?
From my article:
[I]sn’t the difficulty of changing an explanation at least partly a property not of the explanation itself but of whoever is trying to change it? If I’m having difficulty changing it, maybe that’s because I lack imagination. Or maybe I’m just new to that field and an expert could easily change it. In which case the difficulty of changing an explanation is, again, not an objective property of that explanation but a subjective property of its critics. How could subjective properties be epistemologically fundamental?
From my article:
[D]epending on context, being hard to change can be a bad thing. For example, ‘tight coupling’ is a reason software can be hard to change, and it’s considered bad because it reduces maintainability.
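The tight-coupling point can be illustrated with a minimal sketch (all names hypothetical): a class that constructs its own concrete dependency is hard to change, while one that accepts the dependency is not.

```python
class FileLogger:
    """A concrete logger (hypothetical example class)."""
    def __init__(self, path: str):
        self.path = path

    def log(self, message: str) -> None:
        print(f"{self.path}: {message}")


class TightOrderService:
    def __init__(self):
        # Tight coupling: the service builds a concrete FileLogger itself,
        # so switching to any other logger means editing this class.
        self.logger = FileLogger("/var/log/orders.log")

    def place_order(self, item: str) -> str:
        self.logger.log(f"order placed: {item}")
        return item


class LooseOrderService:
    def __init__(self, logger):
        # Loose coupling: any object with a .log(message) method works,
        # so the service need not change when the logging strategy does.
        self.logger = logger

    def place_order(self, item: str) -> str:
        self.logger.log(f"order placed: {item}")
        return item
```

Here, being ‘hard to change’ is plainly a defect of the first design, not a virtue.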
Deutsch says rationality means seeking good explanations, so without a step-by-step guide on how to seek good explanations, we cannot know when we are being irrational. That’s bad for error correction.
Popper formalized much of his epistemology, such as the notions of empirical content and degrees of falsifiability. Why hold Deutsch to a different standard? Why couldn’t he formalize the steps for finding the quality of a given explanation?
No, it’s asking for a formalization of rational decision-making, which is a related but separate issue. Given a set of explanations (after they’ve already been created), what non-creative sorting algorithm do we use to find the best one?
Deutsch says to choose between explanations “according to how good they are” – note the plural.
What if I can only come up with one explanation? Can I just go with that one? What if it’s bad but still the best I could do? He leaves such questions open.
Deutsch contradicts his yardstick for understanding a computational task. He says that you haven’t understood a computational task if you can’t program it. His method of decision-making based on finding good explanations is a computational task. He can’t program it, so he hasn’t understood it.
Isn’t every theory infinitely underspecified?
This stance is presumably a version of the epistemological cynicism I identify here.
Deutsch leaves open how we find out how hard to vary an explanation is. We need more details. In some cases it’s obvious, but we need a general description for less-obvious cases.
Thanks for asking good questions.
Is it accurate to view reason more as a process than a static state?
Yes.
Where the process might be summed up by
1. Being open to criticism
2. Truth-seeking (commitment to getting ideas to jibe)
Yes. Aka ‘common-preference finding’ aka ‘fun’.
Some of the virtues that @benjamin-davies* has put together are part of it, too.
Maybe I don’t understand the question, but I don’t think there’s a one-size-fits-all criterion to use for that scenario. It depends on the content of the ideas and how they conflict exactly.
All I can say without more info is that we can try to criticize ideas and adopt the ones with no pending criticisms. That’s true for any kind of idea – explicit, inexplicit, conscious, unconscious, executable, etc. See #2281.
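The rule stated above — adopt only ideas with no pending criticisms — can be sketched as a trivial filter (idea names and criticisms here are made up for illustration):

```python
# Map each idea to its list of pending (unaddressed) criticisms.
ideas = {
    "explanation A": [],                          # no pending criticisms
    "explanation B": ["conflicts with the data"], # one pending criticism
}

# Adopt only those ideas whose criticism list is empty.
adopted = [idea for idea, criticisms in ideas.items() if not criticisms]
```

The point is that adoption is binary on pending criticisms, with no point scores involved.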
Yeah. I mean finding unanimous consent between different kinds of ideas generally, not just between ideas about rationality. See also #3049 and #2281.
[A]ny system only ever has input, output, and functions that determine how that output is generated. What else is there?
Minds don’t necessarily output anything. Also, they don’t just run existing functions, they create new ones.
Don't you think our particular perspective (which is filtered through the body as sense perception) affects our conceptual system and ways we understand ideas?
Parochially. Culture has more impact.
Why would an AGI use spatial metaphors such as ‘understand’, ‘arrive’, and ‘close to’ to understand ideas?
Because it would be a product of our culture and speak English.
But to formulate a general theory for agents, the term ‘people’ is too strong when speaking of what’s relevant for a bacterium…
Yes. This tells you that people aren’t just agents. They are agents in the sense that they exist in some environment they can interact with and move around in. But they’re so much more than that.
It’s a bit like saying humans are mammals. They are, but that’s not their distinguishing characteristic, so we can’t study mammals to learn about people.
I wouldn’t bother with cog sci or any ‘agentic’ notion of people. Focus on Popperian epistemology instead. It’s the only promising route we have.
…a bacterium … also has problems that shape its actions, what it finds relevant, etc…
A bacterium has ‘problems’ in some sense but it cannot create new knowledge to solve them. It may be more accurate to say that its genes have problems.
[T]he framework emerged out of biology trying to make a theory of organisms in general…
That doesn’t mean static memes couldn’t have co-opted the framework to undermine man and his mind.
The only real change I seem to have is in every conscious moment.
I don’t know what it means to ‘have change’, but note that even unconscious ideas evolve in our minds all the time. So those change as well, if that’s what you mean.