“Can you live your life 100% guided by reason?”
Dennis Hackethal started this discussion 3 months ago.
I ask Chicagoans their thoughts on reason and rationality.
https://www.youtube.com/watch?v=gRUS8dMGOF4
Reflecting on one's past thought and action seems to be a key component of living a life 100% guided by reason. Thinking about this has inspired me to make an effort to search for methods and tools that help systematise, formalise and improve the quality of my self-reflection.
I already have a loose journalling habit, but it is completely free of schedule, structure or method.
Would you like to try formulating an explicit methodology for how you want to use Veritula?
I noticed that you’ve started a bunch of discussions but I don’t believe you’ve reached a resolution on any of them.
I think this is partly to do with the fact that Veritula has no clear way of indicating when a resolution has been reached or a problem has been solved.
For example, I am currently applying #2840, and it is working well. There is no obvious thing I should be doing in Veritula to note that. I would probably only bring it up again if it didn’t solve the problem in the end.
I think this is partly to do with the fact that Veritula has no clear way of indicating when a resolution has been reached or a problem has been solved.
It does. For example, you could post an idea saying ‘I have decided to do X.’ Like in your discussion on where to move.
You can also indicate resolution of top-level criticisms by archiving them when they have pending counter-criticisms. The meta discussion is an example of top-level ideas reaching resolutions in this way.
As I think about this, I notice that—once I solve a given problem with a new idea—I have no habit to consciously acknowledge that a problem has been solved, much less to write down that it has been solved. The ex-problem fades from my mind as I set my mind on a new problem.
I could try to make it a habit to explicitly acknowledge when I do find solutions to problems. If the solution is found on Veritula, it would be natural to acknowledge it here too.
I like the idea of explicitly acknowledging progress in this way, because it might help me become more prideful in the Objectivist sense.
I think this is partly to do with the fact that Veritula has no clear way of indicating when a resolution has been reached or a problem has been solved.
You should take personal responsibility and not blame the tool.
Would you like to try formulating an explicit methodology for how you want to use Veritula?
This seems like a good idea.
Closing threads is a common problem in my life. I should look for ways to increase my propensity to resolve/finish things I start.
Methods I look for need to allow for the fact that not everything needs to be resolved, i.e. that having some open threads is inevitable, and that some of those threads are acceptable to leave open indefinitely.
Idea: Keep a document tracking open threads, updating it every night. Every morning, feed it to Gemini Flash and have it coach me on what I could work towards resolving today.
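The open-threads idea could be sketched as a small script. A minimal sketch, assuming the threads are kept as plain strings; the actual Gemini call is left as a hypothetical, commented stub, since the model name and API shape are assumptions and not part of the original idea:

```python
# Minimal sketch of the open-threads coaching workflow described above.
# Assumptions (mine, not the poster's): threads are plain strings, and
# the Gemini call is left as an untested, commented stub.

def build_coaching_prompt(threads):
    """Turn the list of open threads into a morning coaching prompt."""
    bullets = "\n".join(f"- {t.strip()}" for t in threads if t.strip())
    return (
        "Here are my currently open threads:\n"
        f"{bullets}\n"
        "Which one or two could I realistically work towards resolving "
        "today, and what would a concrete first step be?"
    )

prompt = build_coaching_prompt(["Decide where to move", "Summarise fallibilism"])
print(prompt)

# The actual call might look roughly like this (hypothetical, untested):
# import google.generativeai as genai
# model = genai.GenerativeModel("gemini-1.5-flash")
# print(model.generate_content(prompt).text)
```

The prompt-building part is trivially testable; only the model call needs an API key.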
If your goal, like mine, is to live a life that is 100% guided by reason, which basically means (#2844) to never adopt ideas that have pending criticisms, you could use Veritula to identify ideas of yours that have pending criticisms so you can either reject those ideas or address the criticisms.
To that end, I suggest you submit a single idea you are confident is correct, and then try your hardest to criticize it. Depending on the idea, I may join you.
It’s a good goal to perfect an idea to the point you’ve mastered it, addressed all objections, understand the objections better than your opponents, etc.
If this sounds up your alley, I recommend starting with something easy. Zelalem tried writing a summary of fallibilism which, after 13 revisions, still contains mistakes.
A recurring theme in the video is people thinking that reason is the domain of logical, explicit thought, whereas your emotions, gut, etc. live in a different domain.
So to them, the question “Can you live your life 100% guided by reason?” means ‘Can you suppress your emotions, gut, etc. your entire life?’
They’re right to answer ‘no’.
They’re wrong to think that’s what reason is about. See #2281, #2844.
Living according to reason and rationality alone is impossible, because propositional knowledge is only a subset of the knowledge needed by an embodied agent (the others being procedural, participatory, and perspectival knowledge).
Calling people “embodied agent[s]” like they’re barely superior to video-game characters is dehumanizing and weird.
This is also borrowed from cognitive science. But what I meant was to point to the fact that there are “pre-conceptual” models, desires, attentional salience etc. that impinge on and filter input to conscious cognition. An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to “move around” in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we’re close to the truth, we understand, we grasp a concept, we arrive at a conclusion.
An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to "move around" in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we're close to the truth, we understand, we grasp a concept, we arrive at a conclusion.
That has nothing to do with brain regions. An AGI running on a laptop would use the same phrases.
Why would an AGI use spatial metaphors like 'understand', 'arrive', and 'close to' when thinking about ideas? Don't you think our particular perspective (which is filtered through the body as sense perception) affects our conceptual system and ways we understand ideas?
Why would an AGI use spatial metaphors like 'understand', 'arrive', and 'close to' when thinking about ideas?
Because it would be a product of our culture and speak English.
Aah, then I agree. I thought you meant AGI would develop the same metaphors independently.
Don't you think our particular perspective (which is filtered through the body as sense perception) affects our conceptual system and ways we understand ideas?
Parochially. Culture has more impact.
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.
This is also borrowed from cognitive science.
Yeah, the cog-sci guys don’t understand Popper or epistemology generally. They seem to view minds and brains as input/output machines. But that isn’t how that works.
I think that's pretty accurate. But if you believe reality simply works by executing a formal set of fundamental rules, how can you believe anything else? By this model, any system only ever has input, output, and functions that determine how that output is generated. What else is there?
If strong emergence exists, other things can "emerge" that have downward causation.
What is the evidence for strong emergence, as opposed to just viewing every phenomenon as the processing of fundamental laws?
Even a non-living system can build up constraints at an aggregate level which have downward causation. After a crystal is formed, the lattice constrains which vibrational modes are possible for individual atoms. In other words, being part of a larger structure (which follows other rules) has downward causation on the "parts" following fundamental rules. There might be other emergent structures that expose other fundamental rules not encompassed by the known fundamental rules.
[A]ny system only ever has input, output, and functions that determine how that output is generated. What else is there?
Minds don’t necessarily output anything. Also, they don’t just run existing functions, they create new ones.
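As a loose software analogy for that last point (my addition, not part of the discussion): even an ordinary program isn't limited to a fixed set of functions; it can construct genuinely new ones at runtime:

```python
# A program is not limited to the functions it starts with: it can
# construct new functions at runtime. Offered only as a loose analogy
# for the claim that minds create new functions rather than merely
# running existing ones.

def make_adder(n):
    # Each call returns a brand-new function that did not exist before.
    def adder(x):
        return x + n
    return adder

add_five = make_adder(5)   # a function created at runtime
print(add_five(10))        # prints 15
```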
I don’t think any of this addresses my original criticism that calling people “embodied agent[s]” is dehumanizing. It sounds like we’re studying rats. So what if cog-sci is dehumanizing? That doesn’t make it better.
I hadn't thought about it like that. The purpose of speaking of an embodied agent is to generalize cognition. To understand what's relevant to an agent, you need to understand how that agent is embodied in the world.
Again, to me, that’s how programmers think about their video-game characters, and how researchers think about lab rats in mazes. I would avoid talking about people as ‘agents’ and instead treat them as human beings.
To understand what’s relevant to a person, you need to understand their problem situation.
I think I agree. But to formulate a general theory for agents, the term ‘people’ is too strong when speaking of what’s relevant for a bacterium (which also has problems that shape its actions, what it finds relevant, etc.). But I agree that persons and agents should be differentiated, since people exceed the pre-given problems set by evolution.
…a bacterium … also has problems that shape its actions, what it finds relevant, etc…
A bacterium has ‘problems’ in some sense but it cannot create new knowledge to solve them. It may be more accurate to say that its genes have problems.
But to formulate a general theory for agents, the term ‘people’ is too strong when speaking of what’s relevant for a bacterium…
Yes. This tells you that people aren’t just agents. They are agents in the sense that they exist in some environment they can interact with and move around in. But they’re so much more than that.
It’s a bit like saying humans are mammals. They are, but that’s not their distinguishing characteristic, so we can’t study mammals to learn about people.
I wouldn’t bother with cog sci or any ‘agentic’ notion of people. Focus on Popperian epistemology instead. It’s the only promising route we have.
The purpose of speaking of an embodied agent is to generalize cognition.
It’s possible that the actual purpose of such language is more sinister than that, having to do with static memes: to continue the age-old mystical tradition of portraying man as a pathetic, helpless being at the mercy of a universe he cannot understand or control.
But I’m purely speculating here and would have to think more about it. So I’m not marking this as a criticism (yet).
I don’t think so, but I don’t know enough of the history. But the framework emerged out of biology trying to make a theory of organisms in general (innate theories like autopoiesis/self-preservation, for example). Then it’s been used specifically in cognitive science to try and integrate the general framework with human cognition. Even though it is dehumanizing, there is some value to viewing at least parts of human cognition in these terms. Whatever creativity is, most of human experience is already pre-given moment to moment, not willed by the person. I don’t think we as people derive our sense of autonomy from this world construction and pre-given coupling (we receive automatic responses/affordances). The only real change I seem to have is in every conscious moment.
Whatever creativity is, most of human experience is already pre-given moment to moment, not willed by the person.
I think what really happens is this: when we’re young, we guess theories about how to experience the world, and then we correct errors in those theories and practice them to the point they become completely automated. Much of this happens in childhood. As adults, we don’t remember doing it. So then experience seems ‘given’.
The only real change I seem to have is in every conscious moment.
I don’t know what it means to ‘have change’, but note that even unconscious ideas evolve in our minds all the time. So those change as well, if that’s what you mean.
[T]he framework emerged out of biology trying to make a theory of organisms in general…
That doesn’t mean static memes couldn’t have co-opted the framework to undermine man and his mind.
Humans use flight-related words even though we can’t fly. From ChatGPT:
- Elevated (thinking, mood, language)
- High-level (ideas, overview)
- Soar (ambitions, prices, imagination)
- Take off (projects, careers)
- Grounded (arguments, people)
- Up in the air (uncertain)
- Overview (“over-see” from above)
- Perspective (originally spatial vantage point)
- Lofty (ideals, goals)
- Aboveboard (open, visible)
- Rise / fall (status, power, ideas)
- Sky-high (expectations, costs)
- Aerial view (conceptual overview)
- Head in the clouds (impractical thinking)
It sounds like you have a different conception of knowledge and rationality from Popper’s/Deutsch’s.
There’s a unity of knowledge. Knowledge isn’t fragmented the way you suggest. Rationality means finding common preferences among ideas, ie making different types of ideas jibe. Why should that not be possible for the types of knowledge you mention?
Even if knowledge is unified at some fundamental level, we might not be able to live by means of this unified knowledge alone (because of how we function, or sheer complexity). Living life might require operating through other 'kinds' of knowledge which are pre-cognitive. You cannot ride a bike or maintain a relationship by thinking through quantum-mechanical or propositional theories in words.
You cannot ride a bike or maintain a relationship by thinking through quantum-mechanical or propositional theories in words.
That isn’t what I mean by unity of knowledge. Of course we can’t process our knowledge in its totality at once. That’s necessarily piecemeal. But that doesn’t mean we can’t live a life guided by reason.
If you consider riding a bike an example of irrationality, and reasoning through quantum mechanics an example of rationality, then you haven’t understood Deutsch’s/my stance on rationality. I think you should study it, ask more questions about it, before you’re ready to criticize it.
After reading some more about Deutsch's and your definition of reason: is it accurate to view reason more as a process than a static state? Where the process might be summed up by
1. Being open to criticism
2. Truth-seeking (commitment to getting ideas to jibe)
Thanks for asking good questions.
Is it accurate to view reason more as a process than a static state?
Yes.
Where the process might be summed up by
1. Being open to criticism
2. Truth-seeking (commitment to getting ideas to jibe)
Yes. Aka ‘common-preference finding’ aka ‘fun’.
Some of the virtues that @benjamin-davies has put together are part of it, too.
Interesting! Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it. Has anyone explored whether the collection of ideas in a person's mind must have a specific structure?
When discussing virtues, you seem to suggest a hierarchical organization of ideas, as opposed to ideas competing horizontally for attention and salience. It appears that ideas organize vertically in a hierarchy, where activating "higher-level" ideas automatically resolves conflicts among lower-level ones. For example, if a snake suddenly appears next to you, all previous internal conflicts dissolve because self-preservation is among the most dominant (highest) ideas in your value structure.
However, individuals can construct even higher-order values that override self-preservation. The structure seems hierarchical: when a top-level idea is activated, there seems to be some alignment in lower level ideas.
Getting ideas to jibe/cohere seems like a more and more fundamental idea the more I think about it.
Agreed. There’s more to it than meets the eye. For example, maybe capitalism can be thought of as society-wide common-preference finding (#3013). Rationality might work the same way across minds as it does within a single mind. Capitalism as an expression of rationality in society.
As for virtues, I think some virtues are more fundamental than others. There are some virtues I think people should adopt. Like, rationality depends on them. But the core functionality of the mind as a whole does not. There’s a difference between creativity and rationality. Which virtues someone adopts and why and how they prioritize them in different situations is downstream of creativity as a whole.
I don’t know if activating higher virtues always resolves conflicts between ideas. But it could put them on hold for a while, yeah. If I see a venomous snake, my main priority is to get to safety (life as the ultimate value, as objectivists would say).
The act of making different types of ideas jibe (propositional ideas, feelings, etc.) doesn't seem to me to be best explained as a rational process. They don't have a shared metric or inter-translatability that would enable comparison. If feelings and other non-rational mental contents cannot be reduced to explicit reasons, then the process of integrating them cannot itself be arrived at through reasoning alone. This doesn't mean reason cannot critique feelings or other non-rational content, only that the integrative process itself operates differently than rational deliberation.
…feelings and other nonrational mental contents…
Feelings aren’t “nonrational” per se. There’s a rational place for feelings. See #3632. I mean no disrespect when I say this, but I think you don’t yet understand the notion of rationality I use.
Do you mean something more by rationality than finding unanimous consent between different kinds of ideas?
…cannot be reduced to explicit reasons…
Favoring explicit ideas over inexplicit ones is an example of irrationality.
By what criterion do you evaluate an explicit idea versus an implicit idea?
Maybe I don’t understand the question, but I don’t think there’s a one-size-fits-all criterion to use for that scenario. It depends on the content of the ideas and how they conflict exactly.
All I can say without more info is that we can try to criticize ideas and adopt the ones with no pending criticisms. That’s true for any kind of idea – explicit, inexplicit, conscious, unconscious, executable, etc. See #2281.
One part of my question was whether a formal criterion can be applied universally. If the criterion itself must be chosen, like for instance what brings more fun, meaning, or practical utility, then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
The other part was how you actually criticize an implicit or unconscious idea. If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
Yeah, I think so.
If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
For example, you could observe that you’re feeling sad even though only good things have been happening to you. So the sadness doesn’t make sense (at least on the surface). And then you can introspect from there.
…rational deliberation.
Rationality isn’t the same as deliberation. Deliberation can be part of a rational process but it’s not synonymous with it.