“Can you live your life 100% guided by reason?”
Living according to reason and rationality alone is impossible, because propositional knowledge is only a subset of the knowledge an embodied agent needs (the others being procedural, participatory, and perspectival knowledge).
perspectively knowledge
I’m not sure that’s what you meant to write. Adverbs don’t go in front of nouns. Maybe something about perception?
Fixed it. I meant to write perspectival knowledge, which is a term used in cognitive science.
Okay. When your revision addresses a criticism, remember to uncheck each version of the criticism underneath the revision form. Try revising the idea again and uncheck the criticisms you’ve addressed. Otherwise, your ideas look more problematic than they are.
Calling people “embodied agent[s]” like they’re barely superior to video-game characters is dehumanizing and weird.
This is also borrowed from cognitive science. But what I meant was to point to the fact that there are “pre-conceptual” models, desires, attentional salience etc. that impinge on and filter input to conscious cognition. An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to “move around” in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we’re close to the truth, we understand, we grasp a concept, we arrive at a conclusion.
An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to "move around" in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we're close to the truth, we under-stand, we grasp a concept, we arrive at a conclusion.
That has nothing to do with brain regions. An AGI running on a laptop would use the same phrases.
Why would an AGI use spatial metaphors like ‘understand’, ‘arrive’, and ‘close’ to understand ideas? Don’t you think our particular perspective (which is filtered through the body as sense perception) affects our conceptual system and the ways we understand ideas?
Why would an AGI use spatial metaphors like ‘understand’, ‘arrive’, and ‘close’ to understand ideas?
Because it would be a product of our culture and speak English.
Aah, then I agree. I thought you meant AGI would develop the same metaphors independently.
Don’t you think our particular perspective (which is filtered through the body as sense perception) affects our conceptual system and the ways we understand ideas?
Parochially. Culture has more impact.
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.
This is also borrowed from cognitive science.
Yeah, the cog-sci guys don’t understand Popper or epistemology generally. They seem to view minds and brains as input/output machines. But that isn’t how that works.
I think that's pretty accurate. But if you believe reality simply works by executing a formal set of fundamental rules, how can you believe anything else? By this model, any system only ever has input, output, and functions that determine how that output is generated. What else is there?
If strong emergence exists, other things can "emerge" that have downward causation.
What is the evidence for strong emergence, as opposed to just viewing every phenomenon as the processing of fundamental laws?
Even a non-living system can build up constraints at the aggregate level that have downward causation. After a crystal forms, the lattice constrains which vibrational modes are possible for individual atoms. In other words, being part of a larger structure (which follows its own rules) has downward causation on the "parts" following fundamental rules. There might be other emergent structures that expose additional rules not encompassed by the known fundamental rules.
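To make the crystal example concrete, here is the standard textbook toy model (nothing specific to this discussion): a one-dimensional chain of N identical atoms of mass m, lattice spacing a, and nearest-neighbour spring constant K. Once the lattice exists, the only vibrations available are the collective modes

\[ \omega(k) = 2\sqrt{K/m}\,\bigl|\sin(ka/2)\bigr|, \qquad k_n = \frac{2\pi n}{Na}, \quad n = 0, 1, \dots, N-1, \]

so an individual atom can no longer oscillate at an arbitrary frequency: the structure of the whole fixes the finite set of modes its parts can participate in.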
[A]ny system only ever has input, output, and functions that determine how that output is generated. What else is there?
Minds don’t necessarily output anything. Also, they don’t just run existing functions, they create new ones.
I don’t think any of this addresses my original criticism that calling people “embodied agent[s]” is dehumanizing. It sounds like we’re studying rats. So what if the term is borrowed from cog-sci? That doesn’t make it any better.
Haven't thought about it like that. The purpose of speaking of an embodied agent is to generalize cognition. To understand what's relevant to an agent, you need to understand how that agent is embodied in the world.
Again, to me, that’s how programmers think about their video-game characters, and how researchers think about lab rats in mazes. I would avoid talking about people as ‘agents’ and instead treat them as human beings.
To understand what’s relevant to a person, you need to understand their problem situation.
I think I agree. But to formulate a general theory for agents, the term ‘people’ is too strong when speaking of what’s relevant for a bacterium (which also has problems that shape its actions, what it finds relevant, etc.). That said, I agree that persons and agents should be differentiated, since people exceed the pre-given problems set by evolution.
…a bacterium … also has problems that shape its actions, what it finds relevant, etc…
A bacterium has ‘problems’ in some sense but it cannot create new knowledge to solve them. It may be more accurate to say that its genes have problems.
But to formulate a general theory for agents, the term ‘people’ is too strong when speaking of what’s relevant for a bacterium…
Yes. This tells you that people aren’t just agents. They are agents in the sense that they exist in some environment they can interact with and move around in. But they’re so much more than that.
It’s a bit like saying humans are mammals. They are, but that’s not their distinguishing characteristic, so we can’t study mammals to learn about people.
I wouldn’t bother with cog sci or any ‘agentic’ notion of people. Focus on Popperian epistemology instead. It’s the only promising route we have.
The purpose of speaking of an embodied agent is to generalize cognition.
It’s possible that the actual purpose of such language is more sinister than that, having to do with static memes: to continue the age-old mystical tradition of portraying man as a pathetic, helpless being at the mercy of a universe he cannot understand or control.
But I’m purely speculating here and would have to think more about it. So I’m not marking this as a criticism (yet).
I don’t think so, but I don’t know enough about the history. But the framework emerged out of biology trying to make a theory of organisms in general (innate theories like autopoiesis/self-preservation, for example). It has since been used specifically in cognitive science to try to integrate the general framework with human cognition. Even though it is dehumanizing, there is some value in viewing at least parts of human cognition in these terms. Whatever creativity is, most of human experience is already pre-given moment to moment, not willed by the person. I don’t think we as people derive our sense of autonomy from this world construction and pre-given coupling (we receive automatic responses/affordances). The only real change I seem to have is in every conscious moment.
Whatever creativity is, most of human experience is already pre-given moment to moment, not willed by the person.
I think what really happens is this: when we’re young, we guess theories about how to experience the world, and then we correct errors in those theories and practice them to the point they become completely automated. Much of this happens in childhood. As adults, we don’t remember doing it. So then experience seems ‘given’.
The only real change I seem to have is in every conscious moment.
I don’t know what it means to ‘have change’, but note that even unconscious ideas evolve in our minds all the time. So those change as well, if that’s what you mean.
[T]he framework emerged out of biology trying to make a theory of organisms in general…
That doesn’t mean static memes couldn’t have co-opted the framework to undermine man and his mind.
It sounds like you have a different conception of knowledge and rationality from Popper’s/Deutsch’s.
There’s a unity of knowledge. Knowledge isn’t fragmented the way you suggest. Rationality means finding common preferences among ideas, i.e. making different types of ideas jibe. Why should that not be possible for the types of knowledge you mention?
Even if knowledge is unified at some fundamental level, we might not be able to live by means of this unified knowledge alone (because of how we function, or because of sheer complexity). Living life might require operating through other ‘kinds’ of knowledge which are pre-cognitive. You cannot ride a bike or maintain a relationship by thinking through quantum-mechanical or propositional theories in words.
You cannot ride a bike or maintain a relationship by thinking through quantum-mechanical or propositional theories in words.
That isn’t what I mean by unity of knowledge. Of course we can’t process our knowledge in its totality at once. That’s necessarily piecemeal. But that doesn’t mean we can’t live a life guided by reason.
If you consider riding a bike an example of irrationality, and reasoning through quantum mechanics an example of rationality, then you haven’t understood Deutsch’s/my stance on rationality. I think you should study it, ask more questions about it, before you’re ready to criticize it.
After reading some more about Deutsch's and your definition of reason: Is it accurate to view reason more as a process than a static state? Where the process might be summed up by:
1. Being open to criticism
2. Truth-seeking (commitment to getting ideas to jibe)
Thanks for asking good questions.
Is it accurate to view reason more as a process than a static state?
Yes.
Where the process might be summed up by
1. Being open to criticism
2. Truth-seeking (commitment to getting ideas to jibe)
Yes. Aka ‘common-preference finding’ aka ‘fun’.
Some of the virtues that @benjamin-davies has put together are part of it, too.
Interesting! Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it. Has anyone explored whether the collection of ideas in a person's mind must have a specific structure?
When discussing virtues, you seem to suggest a hierarchical organization of ideas, as opposed to ideas competing horizontally for attention and salience. It appears that ideas organize vertically in a hierarchy, where activating "higher-level" ideas automatically resolves conflicts among lower-level ones. For example, if a snake suddenly appears next to you, all previous internal conflicts dissolve because self-preservation is among the most dominant (highest) ideas in your value structure.
However, individuals can construct even higher-order values that override self-preservation. The structure seems hierarchical: when a top-level idea is activated, there seems to be some alignment in lower-level ideas.
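A toy sketch of that hierarchical picture, purely for illustration (the Value class, the priority numbers, and names like "get away from the snake" are all made up here, not taken from anything above):

```python
# Purely illustrative toy model of the hierarchy described above.
from dataclasses import dataclass

@dataclass
class Value:
    name: str
    priority: int   # higher number = higher up in the hierarchy

def remaining_conflicts(values: list[Value], activated: Value) -> list[Value]:
    """Values still competing for attention once `activated` is triggered.

    Activating a high-priority value suppresses everything of lower priority:
    the lower-level conflict isn't resolved so much as preempted.
    """
    return [v for v in values if v.priority >= activated.priority and v is not activated]

# Two everyday values in conflict, then a snake shows up.
career = Value("finish the report", priority=2)
leisure = Value("watch a film", priority=2)
self_preservation = Value("get away from the snake", priority=10)

print(remaining_conflicts([career, leisure, self_preservation], activated=self_preservation))
# -> []  (nothing outranks self-preservation, so the lower-level conflict vanishes)
```

In this sketch the alignment is just preemption by priority; constructing a higher-order value that overrides self-preservation would correspond to adding a Value with an even larger priority.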
Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it.
Agreed. There’s more to it than meets the eye. For example, maybe capitalism can be thought of as society-wide common-preference finding (#3013). Rationality might work the same way across minds as it does within a single mind. Capitalism as an expression of rationality in society.
As for virtues, I think some virtues are more fundamental than others. There are some virtues I think people should adopt. Like, rationality depends on them. But the core functionality of the mind as a whole does not. There’s a difference between creativity and rationality. Which virtues someone adopts and why and how they prioritize them in different situations is downstream of creativity as a whole.
I don’t know if activating higher virtues always resolves conflicts between ideas. But it could put them on hold for a while, yeah. If I see a venomous snake, my main priority is to get to safety (life as the ultimate value, as objectivists would say).
The act of making different types of ideas jibe (propositional ideas, feelings, etc.) doesn’t seem to me to be best explained as a rational process. They don’t have a shared metric or inter-translatability that would enable comparison. If feelings and other non-rational mental contents cannot be reduced to explicit reasons, then the process of integrating them cannot itself be arrived at through reasoning alone. This doesn’t mean reason cannot critique feelings or other non-rational content, only that the integrative process itself operates differently from rational deliberation.
…feelings and other nonrational mental contents…
Feelings aren’t “nonrational” per se. There’s a rational place for feelings. See #3632. I mean no disrespect when I say this, but I think you don’t yet understand the notion of rationality I use.
By rationality, do you mean something more than finding unanimous consent between different kinds of ideas?
…cannot be reduced to explicit reasons…
Favoring explicit ideas over inexplicit ones is an example of irrationality.
By what criterion do you evaluate an explicit idea versus an implicit idea?
Maybe I don’t understand the question, but I don’t think there’s a one-size-fits-all criterion to use for that scenario. It depends on the content of the ideas and how they conflict exactly.
All I can say without more info is that we can try to criticize ideas and adopt the ones with no pending criticisms. That’s true for any kind of idea – explicit, inexplicit, conscious, unconscious, executable, etc. See #2281.
One part of my question was whether a formal criterion can be applied universally. If the criterion itself must be chosen, for instance whichever brings more fun, meaning, or practical utility, then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
The other part was how you actually criticize an implicit or unconscious idea. If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
Yeah, I think so.
If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
For example, you could observe that you’re feeling sad even though only good things have been happening to you. So the sadness doesn’t make sense (at least on the surface). And then you can introspect from there.
…rational deliberation.
Rationality isn’t the same as deliberation. Deliberation can be part of a rational process but it’s not synonymous with it.