Search Ideas
2704 ideas match your query.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.
None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.
I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
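Here’s a minimal sketch of what that might look like, as runnable Python rather than pseudo-code. It implements the classic nearest-neighbor rule for route planning: locally greedy and not optimal, but ‘good enough’ in exactly the sense of the definition above. The function name and the example cities are mine, invented for illustration.

```python
import math

def nearest_neighbor_route(cities: dict[str, tuple[float, float]], start: str) -> list[str]:
    """Greedy rule of thumb: always visit the closest unvisited city next."""
    route = [start]
    unvisited = set(cities) - {start}
    while unvisited:
        here = cities[route[-1]]
        # The 'shortcut': pick the locally best step, ignore the global optimum.
        nearest = min(unvisited, key=lambda c: math.dist(here, cities[c]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route

cities = {"A": (0, 0), "B": (1, 5), "C": (2, 1), "D": (5, 2)}
print(nearest_neighbor_route(cities, "A"))  # ['A', 'C', 'D', 'B']
```

The point isn’t route planning; it’s that the heuristic is, if anything, easier to program than an exact solver would be.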
Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.
Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.
Persephone vs axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean the judgment has anything to do with hardness to vary.
Read The Fountainhead by Ayn Rand. That should give you some fuel to move forward.
If that’s too long, watch ‘The Simplest Thing in the World’.
One part of my question was whether a formal criterion can be applied universally. If the criterion itself must be chosen, for instance by what brings more fun, meaning, or practical utility, then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
The other part was how you actually criticize an implicit or unconscious idea. If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
That was autocorrect from my cellphone. ‘Mye’ means ‘a lot’ in Norwegian. Not a good idea to have autocorrect on when you’re writing in two languages.
Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it.
Agreed. There’s more to it than meets the eye. For example, maybe capitalism can be thought of as society-wide common-preference finding (#3013). Rationality might work the same way across minds as it does within a single mind. Capitalism as an expression of rationality in society.
As for virtues, I think some virtues are more fundamental than others. There are some virtues I think people should adopt; rationality depends on them. But the core functionality of the mind as a whole does not. There’s a difference between creativity and rationality. Which virtues someone adopts, why, and how they prioritize them in different situations is downstream of creativity as a whole.
I don’t know if activating higher virtues always resolves conflicts between ideas. But it could put them on hold for a while, yeah. If I see a venomous snake, my main priority is to get to safety (life as the ultimate value, as objectivists would say).
Just referring here to ‘alters’ as the clinical word for the other dissociated personalities.
It seems more plausible to me that dissociative identity disorder actually is more like the division of a mind. Alters often recall meeting each other in dreams (seeing the other alters from their local perspective within the dream). So it seems that the split goes further and actually gives rise to different experiences within a mind. They live and experience from different perspectives, and start communicating with each other more like distinct minds. In split-brain patients, the left and right hemispheres can disagree on what clothing to wear in the morning, and physically fight over wearing a tie or not.
Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
Yeah, I think so.
If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
For example, you could observe that you’re feeling sad even though only good things have been happening to you. So the sadness doesn’t make sense (at least on the surface). And then you can introspect from there.
Interesting! Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it. Has anyone explored whether the collection of ideas in a person's mind must have a specific structure?
When discussing virtues, you seem to suggest a hierarchical organization of ideas, as opposed to ideas competing horizontally for attention and salience. It appears that ideas organize vertically in a hierarchy, where activating "higher-level" ideas automatically resolves conflicts among lower-level ones. For example, if a snake suddenly appears next to you, all previous internal conflicts dissolve because self-preservation is among the most dominant (highest) ideas in your value structure.
However, individuals can construct even higher-order values that override self-preservation. The structure seems hierarchical: when a top-level idea is activated, there seems to be some alignment in lower-level ideas.
mye
How does this happen? (Not a metaphorical question.)
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
But an AGI might not develop such phrases independently. (See #3730.)
Aah, then I agree. I thought you meant AGI would develop the same metaphors independently.
Deutsch leaves open whether ‘difficulty to vary’ is a relative scale or an absolute one.
Do I need at least two explanations to know whether one is harder to vary than the other? Or can I tell, with only a single explanation, how hard it is to vary on its own?
Choosing between explanations “according to how good they are” is vague. If I have three explanations, A, B, and C, and A is better than B, which is better than C, does that mean I adopt only A and reject both B and C? I assume so, but I don’t think Deutsch ever says so anywhere.
The quoted statement is also compatible with adopting A with strong conviction, B with medium conviction (as a backup or something), and only slightly adopting C (if it’s still good, just not as good as the others) or rejecting C slightly (if it’s a little bad) or rejecting it very strongly (if it’s really bad).
Deutsch’s stance in my own words:
The distinguishing characteristic between rationality and irrationality is that rationality is the search for good explanations. All progress comes from the search for good explanations. So the distinction between good vs bad explanations is epistemologically fundamental.
A good explanation is hard to vary “while still accounting for what it purports to account for.” (BoI chapter 1 glossary.) A bad explanation is easy to vary.
For example, the Persephone myth as an explanation of the seasons is easy to change without impacting its ability to explain the seasons. You could arbitrarily replace Persephone and other characters and the explanation would still ‘work’. The axis-tilt explanation of the seasons, on the other hand, is hard to change without breaking it. You can’t just replace the axis with something else, say.
The quality of a theory is a matter of degree. The harder it is to change a theory, the better that theory is. When deciding which explanation to adopt, we should “choose between [explanations] according to how good they are…: how hard to vary.” (BoI chapter 9; see similar remark in chapter 8.)
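To make this concrete, here’s a toy sketch in the same spirit (my own construction, not anything Deutsch gives): represent an explanation as named components, substitute each component, and count how many variants still account for the phenomenon. Counting gives each explanation a score on its own, which would also bear on the earlier question of whether hardness to vary is relative or absolute. The data and the pretend checkers below are invented for illustration.

```python
from typing import Callable

def surviving_variants(components: dict[str, list[str]],
                       still_explains: Callable[[dict], bool]) -> int:
    """Count single-component substitutions that leave the explanation working."""
    count = 0
    for name, alternatives in components.items():
        for alt in alternatives:
            # Swap one component for an alternative and test the variant.
            if still_explains({name: alt}):
                count += 1
    return count

# The Persephone myth: swap any character for another deity and it still
# 'explains' the seasons, because nothing in it is constrained by the
# phenomenon. The pretend checker accepts every substitution.
persephone = {"goddess": ["Demeter", "Freyja"], "realm": ["Hades", "Valhalla"]}
print(surviving_variants(persephone, lambda v: True))   # 4: easy to vary

# The axis tilt: replace the tilt with anything else and the account of
# the seasons breaks. The pretend checker rejects every substitution.
axis_tilt = {"cause": ["axis wobble", "orbital distance"]}
print(surviving_variants(axis_tilt, lambda v: False))   # 0: hard to vary
```

The higher the count, the easier the explanation is to vary, and so the worse it is on Deutsch’s criterion. The hard part, of course, is the checker itself; this sketch only shows the counting, not how to decide whether a variant still accounts for what it purports to account for.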