“Can you live your life 100% guided by reason?”
#4257·Knut Sondre Sæbø revised 5 days ago
Those are still spatial metaphors. I'm not saying we can't extend our ideas through imagination, creativity etc. Only that the metaphors and concepts we use, and that have meaning for us, are constrained by the perspectives we can take as humans. When we try to explain how bats perceive through echolocation, we fall back on visual simulations, because sight is the only perceptual world we know. Ideas have a similar limitation.
I'm not saying we can't extend our ideas through imagination, creativity etc.
That’s what you were originally saying in #3626. That’s what the claim “Living according to reason and rationality alone is impossible” amounts to.
#4254·Knut Sondre Sæbø, 5 days ago
We explain the world by postulating invisible things, but we can only understand those abstractions through concrete metaphors rooted in our physical experience. A concept or idea with no experiential grounding is meaningless.
A concept or idea with no experiential grounding is meaningless.
Maybe, but that’s different from mistaking a parochial factor for a fundamental one.
Not all explanations use metaphors.
#3769·Dennis HackethalOP, about 1 month ago
Humans use flight-related words even though we can’t fly. From ChatGPT:
- Elevated (thinking, mood, language)
- High-level (ideas, overview)
- Soar (ambitions, prices, imagination)
- Take off (projects, careers)
- Grounded (arguments, people)
- Up in the air (uncertain)
- Overview (“over-see” from above)
- Perspective (originally spatial vantage point)
- Lofty (ideals, goals)
- Aboveboard (open, visible)
- Rise / fall (status, power, ideas)
- Sky-high (expectations, costs)
- Aerial view (conceptual overview)
- Head in the clouds (impractical thinking)
Those are just spatial metaphors though. I'm not saying we can't extend our ideas through imagination, creativity etc. Only that the metaphors and concepts we use, and that have meaning for us, are constrained by the perspectives we can take as humans. Can you think of any idea that isn't rooted in an experiential perspective?
#3768·Dennis HackethalOP, about 1 month ago
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.
Yeah maybe but again (#3693), those are parochial factors, starting points. Ideas are more important. AGI could just switch bodies rapidly anyway.
#3743·Knut Sondre Sæbø, about 1 month ago
That was autocorrect from my cellphone. Mye means "a lot" in Norwegian. Not a good idea to have autocorrect on when you're writing in two languages.
I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.
#3654·Knut Sondre Sæbø revised about 1 month ago
This is also borrowed from cognitive science. But what I meant was to point to the fact that there are “pre-conceptual” models, desires, attentional salience etc. that impinge on and filter input to conscious cognition. An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to “move around” in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we’re close to the truth, we understand, we grasp a concept, we arrive at a conclusion.
#3755·Knut Sondre Sæbø revised about 1 month ago
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.
If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.
#3734·Dennis HackethalOP, about 1 month ago
mye
How does this happen? (Not a metaphorical question.)
#3736·Knut Sondre Sæbø revised about 1 month ago
Interesting! Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it. Has anyone explored whether the collection of ideas in a person's mind must have a specific structure?
When discussing virtues, you seem to suggest a hierarchical organization of ideas, as opposed to ideas competing horizontally for attention and salience. It appears that ideas organize vertically in a hierarchy, where activating "higher-level" ideas automatically resolves conflicts among lower-level ones. For example, if a snake suddenly appears next to you, all previous internal conflicts dissolve because self-preservation is among the most dominant (highest) ideas in your value structure.
However, individuals can construct even higher-order values that override self-preservation. The structure seems hierarchical: when a top-level idea is activated, there seems to be some alignment in lower-level ideas.
Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it.
Agreed. There’s more to it than meets the eye. For example, maybe capitalism can be thought of as society-wide common-preference finding (#3013). Rationality might work the same way across minds as it does within a single mind. Capitalism as an expression of rationality in society.
As for virtues, I think some virtues are more fundamental than others. There are some virtues I think people should adopt. Like, rationality depends on them. But the core functionality of the mind as a whole does not. There’s a difference between creativity and rationality. Which virtues someone adopts and why and how they prioritize them in different situations is downstream of creativity as a whole.
I don’t know if activating higher virtues always resolves conflicts between ideas. But it could put them on hold for a while, yeah. If I see a venomous snake, my main priority is to get to safety (life as the ultimate value, as objectivists would say).
#3731·Knut Sondre Sæbø, about 1 month ago
One part of mye question was whether a formal criterion can be applied universally. If the criterion itself must be chosen, like for instance what brings more fun, meaning, practical utility, then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
The other part was how you actually criticize an implicit or unconscious idea. If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
Yeah, I think so.
If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
For example, you could observe that you’re feeling sad even though only good things have been happening to you. So the sadness doesn’t make sense (at least on the surface). And then you can introspect from there.
#3699·Dennis HackethalOP revised about 1 month ago
Thanks for asking good questions.
Is it accurate to view reason more as a process than a static state?
Yes.
Where the process might be summed up by
1. Being open to criticism
2. Truth-seeking (commitment to getting ideas to jibe)
Yes. Aka ‘common-preference finding’ aka ‘fun’.
Some of the virtues that @benjamin-davies has put together are part of it, too.
#3732·Dennis HackethalOP, about 1 month ago
But an AGI might not develop such phrases independently. (See #3730.)
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.