Search Ideas
2048 ideas match your query.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.
Yeah maybe but again (#3693), those are parochial factors, starting points. Ideas are more important. AGI could just switch bodies rapidly anyway.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.
I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism.
2) Skepticism is too different from fallibilism to consider it a continuation.
One rule of thumb financial advisors have told me in the past is to have enough cash on hand to last at least six months without an income.
If you don’t, quitting your job right now could be a bad idea, and your first priority should be to build enough runway.
(This is not financial advice – follow at your own risk.)
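For what it’s worth, here’s a toy back-of-the-envelope version of that rule of thumb in Python (the figures are made up, purely for illustration):

```python
# Toy runway check for the six-months rule of thumb (illustrative numbers only).
savings = 24_000          # cash on hand
monthly_expenses = 3_500  # rent, food, insurance, etc.

runway_months = savings / monthly_expenses
print(f"Runway: {runway_months:.1f} months")  # ~6.9 months

if runway_months >= 6:
    print("Meets the six-month rule of thumb.")
else:
    print("Consider building more runway before quitting.")
```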
You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.
You describe your job as “excruciating”. That’s reason to quit.
No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.
Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.
A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.
None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.
I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
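To make that concrete, here’s a minimal sketch of a classic heuristic, in Python rather than pseudo-code (the routing problem and function names are my own illustration, not something from the discussion above): a greedy nearest-neighbor route next to the exhaustive ‘proper’ algorithm it approximates.

```python
import math
from itertools import permutations

# Hypothetical example: planning a short route through a few cities.
# The exhaustive approach checks every ordering and is guaranteed optimal;
# the heuristic just hops to the nearest unvisited city each time. It is not
# optimized or guaranteed correct, but it is usually "good enough".

def distance(a, b):
    return math.dist(a, b)

def route_length(route):
    return sum(distance(route[i], route[i + 1]) for i in range(len(route) - 1))

def exhaustive_route(start, cities):
    # "Proper" algorithm: try every permutation (factorial time).
    best = min(permutations(cities), key=lambda p: route_length((start,) + p))
    return (start,) + best

def nearest_neighbor_route(start, cities):
    # Heuristic: always visit the closest remaining city (quadratic time).
    remaining, route = list(cities), [start]
    while remaining:
        nearest = min(remaining, key=lambda c: distance(route[-1], c))
        route.append(nearest)
        remaining.remove(nearest)
    return tuple(route)

cities = [(2, 3), (5, 1), (6, 6), (1, 7), (8, 3)]
print(route_length(exhaustive_route((0, 0), cities)))        # optimal
print(route_length(nearest_neighbor_route((0, 0), cities)))  # usually close, sometimes worse
```

Note how much shorter and simpler the heuristic is than the exhaustive search, which fits the point above that heuristics can be easier to program than ‘proper’ algorithms.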
Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.
Persephone vs axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to ‘hard to vary’.
Read The Fountainhead by Ayn Rand. That should give you some fuel to move forward.
If that’s too long, watch ‘The Simplest Thing in the World’.
Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it.
Agreed. There’s more to it than meets the eye. For example, maybe capitalism can be thought of as society-wide common-preference finding (#3013). Rationality might work the same way across minds as it does within a single mind. Capitalism as an expression of rationality in society.
As for virtues, I think some virtues are more fundamental than others. There are some virtues I think people should adopt. Like, rationality depends on them. But the core functionality of the mind as a whole does not. There’s a difference between creativity and rationality. Which virtues someone adopts and why and how they prioritize them in different situations is downstream of creativity as a whole.
I don’t know if activating higher virtues always resolves conflicts between ideas. But it could put them on hold for a while, yeah. If I see a venomous snake, my main priority is to get to safety (life as the ultimate value, as objectivists would say).
Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
Yeah, I think so.
If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
For example, you could observe that you’re feeling sad even though only good things have been happening to you. So the sadness doesn’t make sense (at least on the surface). And then you can introspect from there.
How does this happen? (Not a metaphorical question.)
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
But an AGI might not develop such phrases independently. (See #3730.)
Deutsch leaves open whether ‘difficulty to vary’ is a relative scale or an absolute one.
Do I need at least two explanations to know whether one is harder to vary than the other? Or can I tell, with only a single explanation, how hard it is to vary on its own?
Choosing between explanations “according to how good they are” is vague. If I have three explanations, A, B, and C, and A is better than B, which is better than C, does that mean I adopt only A and reject both B and C? I assume so, but I don’t think Deutsch ever says so anywhere.
The quoted statement is also compatible with adopting A with strong conviction, B with medium conviction (as a backup or something), and only slightly adopting C (if it’s still good, just not as good as the others) or rejecting C slightly (if it’s a little bad) or rejecting it very strongly (if it’s really bad).
Deutsch’s stance in my own words:
The distinguishing characteristic between rationality and irrationality is that rationality is the search for good explanations. All progress comes from the search for good explanations. So the distinction between good vs bad explanations is epistemologically fundamental.
A good explanation is hard to vary “while still accounting for what it purports to account for.” (BoI chapter 1 glossary.) A bad explanation is easy to vary.
For example, the Persephone myth as an explanation of the seasons is easy to change without impacting its ability to explain the seasons. You could arbitrarily replace Persephone and other characters and the explanation would still ‘work’. The axis-tilt explanation of the seasons, on the other hand, is hard to change without breaking it. You can’t just replace the axis with something else, say.
The quality of a theory is a matter of degree. The harder it is to change a theory, the better that theory is. When deciding which explanation to adopt, we should “choose between [explanations] according to how good they are…: how hard to vary.” (BoI chapter 9; see similar remark in chapter 8.)
From my article:
[T]he assignment of positive values enables self-coercion: if I have a ‘good’ explanation worth 500 points, and a criticism worth only 100 points, Deutsch’s epistemology (presumably) says to adopt the explanation even though it has a pending criticism. After all, we’re still 400 in the black! But according to the epistemology of Taking Children Seriously, a parenting philosophy Deutsch cofounded before writing The Beginning of Infinity, acting on an idea that has pending criticisms is the definition of self-coercion. Such an act is irrational and incompatible with his view that rationality is fun in the sense that rationality means unanimous consent between explicit, inexplicit, unconscious, and any other type of idea in one’s mind.
In short, does the search for good explanations enable self-coercion and contradict TCS?
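To make the contrast concrete, here’s a toy sketch of the two decision rules in Python (my own illustration; the point values and function names are hypothetical, not anything Deutsch or TCS specifies):

```python
# Toy contrast between the two decision rules discussed above (my own sketch;
# the point values are hypothetical, not anything Deutsch or TCS specifies).

def adopt_by_score(explanation_points, criticism_points):
    # Degrees-of-goodness reading: weigh the explanation against its criticisms.
    # 500 points vs. a 100-point criticism leaves us "400 in the black", so adopt.
    return explanation_points - sum(criticism_points) > 0

def adopt_without_self_coercion(pending_criticisms):
    # TCS-style reading: acting on an idea with any pending criticism is self-coercion.
    return len(pending_criticisms) == 0

print(adopt_by_score(500, [100]))                          # True: adopt despite the criticism
print(adopt_without_self_coercion(["pending criticism"]))  # False: don't act yet
```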
Our explanations do get better the more criticisms we address, but Deutsch has it backwards: the increasing quality of an explanation is the result of critical activity, not its means.