A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.
None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.
I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
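For example, something like this rough sketch of a greedy nearest-neighbor routing heuristic (a toy example of my own, not taken from any source): it never guarantees the optimal route, but it’s cheap and usually ‘good enough’.

```python
import math

def nearest_neighbor_route(points):
    """Greedy heuristic: always visit the closest unvisited point next.

    Not guaranteed to find the shortest route, but cheap and usually
    'good enough' -- which is what makes it a heuristic rather than a
    full optimization algorithm.
    """
    if not points:
        return []
    route = [points[0]]
    unvisited = list(points[1:])
    while unvisited:
        current = route[-1]
        # Workaround: instead of searching all possible orderings,
        # just pick whatever looks best right now.
        closest = min(unvisited, key=lambda p: math.dist(current, p))
        unvisited.remove(closest)
        route.append(closest)
    return route

print(nearest_neighbor_route([(0, 0), (5, 1), (1, 1), (4, 4)]))
```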
Persephone vs axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to the ‘hard to vary’ criterion.
Read The Fountainhead by Ayn Rand. That should give you some fuel to move forward.
If that’s too long, watch ‘The Simplest Thing in the World’
One part of my question was whether a formal criterion can be applied universally. If the criterion itself must be chosen, for instance based on what brings more fun, meaning, or practical utility, then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
The other part was how you actually criticize an implicit or unconscious idea. If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
That was autocorrect from my cellphone. ‘Mye’ means ‘a lot’ in Norwegian. Not a good idea to have autocorrect on when you’re writing in two languages.
Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it.
Agreed. There’s more to it than meets the eye. For example, maybe capitalism can be thought of as society-wide common-preference finding (#3013). Rationality might work the same way across minds as it does within a single mind. Capitalism as an expression of rationality in society.
As for virtues, I think some virtues are more fundamental than others. There are some virtues I think people should adopt. Like, rationality depends on them. But the core functionality of the mind as a whole does not. There’s a difference between creativity and rationality. Which virtues someone adopts and why and how they prioritize them in different situations is downstream of creativity as a whole.
I don’t know if activating higher virtues always resolves conflicts between ideas. But it could put them on hold for a while, yeah. If I see a venomous snake, my main priority is to get to safety (life as the ultimate value, as objectivists would say).
Just referring here to alters as the clinical word for ‘the other dissociated personalities’.
It seems more plausible to me that dissociative identity disorder actually is more like the division of a mind. The alters often recall meeting each other in dreams (seeing the other alters from their local perspective within the dream). So it seems that the split goes further and actually gives rise to different experiences within a mind. They live and experience from different perspectives, and start communicating with each other more like distinct minds. In split-brain patients, the left and right hemispheres can disagree on what clothing to wear in the morning, and physically fight over wearing a tie or not.
Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
Yeah, I think so.
If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
For example, you could observe that you’re feeling sad even though only good things have been happening to you. So the sadness doesn’t make sense (at least on the surface). And then you can introspect from there.
Interesting! Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it. Has anyone explored whether the collection of ideas in a person's mind must have a specific structure?
When discussing virtues, you seem to suggest a hierarchical organization of ideas, as opposed to ideas competing horizontally for attention and salience. It appears that ideas organize vertically in a hierarchy, where activating "higher-level" ideas automatically resolves conflicts among lower-level ones. For example, if a snake suddenly appears next to you, all previous internal conflicts dissolve because self-preservation is among the most dominant (highest) ideas in your value structure.
However, individuals can construct even higher-order values that override self-preservation. The structure seems hierarchical: when a top-level idea is activated, there seems to be some alignment in lower-level ideas.
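A crude sketch of the kind of structure I mean (made-up values and priorities, not a claim about how minds actually compute): when a higher-priority idea activates, whatever conflict the lower-level ones were having simply stops mattering.

```python
# Hypothetical sketch: ideas as name -> priority; the active idea with the
# highest priority wins, so conflicts among lower-level ideas become moot.
values = {
    "self-preservation": 100,
    "finish work project": 40,
    "relax tonight": 35,   # conflicts with the work project
}

def dominant(active):
    """Return whichever currently active value has the highest priority."""
    return max(active, key=lambda name: values[name])

# Ordinary evening: two lower-level values conflict with each other.
print(dominant(["finish work project", "relax tonight"]))

# A snake appears: a higher value activates and the conflict dissolves.
print(dominant(["finish work project", "relax tonight", "self-preservation"]))
```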
mye
How does this happen? (Not a metaphorical question.)
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
Aah, then I agree. I thought you meant AGI would develop the same metaphors independently.
Deutsch leaves open whether ‘difficulty to vary’ is a relative scale or an absolute one.
Do I need at least two explanations to know whether one is harder to vary than the other? Or can I tell, with only a single explanation, how hard it is to vary on its own?
Choosing between explanations “according to how good they are” is vague. If I have three explanations, A, B, and C, and A is better than B, which is better than C, does that mean I adopt only A and reject both B and C? I assume so, but I don’t think Deutsch ever says so anywhere.
The quoted statement is also compatible with adopting A with strong conviction, B with medium conviction (as a backup or something), and only slightly adopting C (if it’s still good, just not as good as the others) or rejecting C slightly (if it’s a little bad) or rejecting it very strongly (if it’s really bad).
From my article:
[T]he assignment of positive values enables self-coercion: if I have a ‘good’ explanation worth 500 points, and a criticism worth only 100 points, Deutsch’s epistemology (presumably) says to adopt the explanation even though it has a pending criticism. After all, we’re still 400 in the black! But according to the epistemology of Taking Children Seriously, a parenting philosophy Deutsch cofounded before writing The Beginning of Infinity, acting on an idea that has pending criticisms is the definition of self-coercion. Such an act is irrational and incompatible with his view that rationality is fun, in the sense that rationality means unanimous consent among explicit, inexplicit, unconscious, and any other types of ideas in one’s mind.
In short, does the search for good explanations enable self-coercion and contradict TCS?
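To make the arithmetic explicit, here’s a hypothetical sketch of the two decision rules side by side (the point values are made up; neither Deutsch nor TCS assigns numbers):

```python
# Hypothetical point values -- this just makes the contrast between the
# two decision rules explicit.
explanation_score = 500
criticism_scores = [100]

# Rule 1 (my reading of the 'good explanations' view): adopt the
# explanation if its net score is still positive.
net = explanation_score - sum(criticism_scores)
adopt_by_score = net > 0                     # True: 400 'in the black'

# Rule 2 (TCS-style): any pending, unaddressed criticism blocks adoption,
# no matter how 'good' the explanation scores.
adopt_by_tcs = len(criticism_scores) == 0    # False: a criticism is pending

print(adopt_by_score, adopt_by_tcs)  # True False -> the two rules conflict
```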
Our explanations do get better the more criticisms we address, but Deutsch has it backwards: the increasing quality of an explanation is the result of critical activity, not its means.
From my article:
Isn’t the assignment of positive scores, of positive reasons to prefer one theory over another, a kind of justificationism? Deutsch criticizes justificationism throughout The Beginning of Infinity, but isn’t an endorsement of a theory as ‘good’ a kind of justification?
From my article:
[I]sn’t the difficulty of changing an explanation at least partly a property not of the explanation itself but of whoever is trying to change it? If I’m having difficulty changing it, maybe that’s because I lack imagination. Or maybe I’m just new to that field and an expert could easily change it. In which case the difficulty of changing an explanation is, again, not an objective property of that explanation but a subjective property of its critics. How could subjective properties be epistemologically fundamental?
From my article:
[D]epending on context, being hard to change can be a bad thing. For example, ‘tight coupling’ is a reason software can be hard to change, and it’s considered bad because it reduces maintainability.
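To illustrate (a generic toy example, not from the article): the tightly coupled version below is hard to change precisely because any change to the storage layer ripples into the report code, and that is a defect, not a virtue.

```python
# Tightly coupled: the report constructs and queries the database itself,
# so swapping the storage layer means rewriting this class.
class TightReportGenerator:
    def generate(self):
        import sqlite3
        conn = sqlite3.connect("sales.db")
        rows = conn.execute("SELECT amount FROM sales").fetchall()
        return sum(amount for (amount,) in rows)

# Loosely coupled: the data source is passed in, so the report code can be
# changed (or tested) independently of where the data comes from.
class LooseReportGenerator:
    def __init__(self, fetch_amounts):
        self.fetch_amounts = fetch_amounts

    def generate(self):
        return sum(self.fetch_amounts())

report = LooseReportGenerator(lambda: [10, 20, 30])
print(report.generate())  # 60
```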
Deutsch says rationality means seeking good explanations, so without a step-by-step guide on how to seek good explanations, we cannot know when we are being irrational. That’s bad for error correction.
Popper formalized much of his epistemology, such as the notions of empirical content and degrees of falsifiability. Why hold Deutsch to a different standard? Why couldn’t he formalize the steps for finding the quality of a given explanation?
No, it’s asking for a formalization of rational decision-making, which is a related but separate issue. Given a set of explanations (after they’ve already been created), what non-creative sorting algorithm do we use to find the best one?
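To put the same point in code (a sketch of my own): once a comparator exists, picking the best explanation is mechanical; the part nobody has formalized is the comparison step itself.

```python
from functools import cmp_to_key

def compare_explanations(a, b):
    """The missing formalization: return -1, 0, or 1 depending on which
    explanation is 'better'. No non-creative procedure for this step has
    been given, which is the whole point of the question."""
    raise NotImplementedError("No known non-creative way to compute this.")

def best_explanation(explanations):
    # Once a comparator exists, sorting and picking the best is trivial.
    return sorted(explanations, key=cmp_to_key(compare_explanations))[-1]
```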