“Can you live your life 100% guided by reason?”

Showing only those parts of the discussion that lead to #3623 and its comments.

Knut Sondre Sæbø’s avatar
3rd of 3 versions leading to #3623

Living according to reason and rationality alone is impossible, because propositional knowledge is only a subset of the knowledge an embodied agent needs (the others being procedural, perspectival, and participatory knowledge).

Criticism · Criticized 2
Dennis Hackethal’s avatar

Calling people “embodied agent[s]” like they’re barely superior to video-game characters is dehumanizing and weird.

Criticism of #3626
Knut Sondre Sæbø’s avatar
2nd of 3 versions

This is also borrowed from cognitive science. But what I meant was to point out that there are "pre-conceptual" models, desires, attentional salience, etc. that impinge on and filter input to conscious cognition. An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to "move around" in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we're close to the truth, we under-stand, we grasp a concept, we arrive at a conclusion.

Criticism of #3605 · Criticized 5
Dennis Hackethal’s avatar

Several typos here. Please use more care when you write ideas.

Criticism of #3623
Dennis Hackethal’s avatar

An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to "move around" in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we're close to the truth, we under-stand, we grasp a concept, we arrive at a conclusion.

That has nothing to do with brain regions. An AGI running on a laptop would use the same phrases.

Criticism of #3623
Knut Sondre Sæbø’s avatar
4th of 4 versions

Why would an AGI use spatial metaphors like 'understand', 'arrive', and 'close to' to understand ideas? Don't you think our particular perspective (which is filtered through the body as sense perception) affects our conceptual system and the ways we understand ideas?

Dennis Hackethal’s avatar

Why would an AGI use spatial metaphors like 'understand', 'arrive', and 'close to' to understand ideas?

Because it would be a product of our culture and speak English.

Knut Sondre Sæbø’s avatar

Aah, then I agree. I thought you meant AGI would develop the same metaphors independently.

Dennis Hackethal’s avatar

Don't you think our particular perspective (which is filtered through the body as sense perception) affects our conceptual system and ways we understand ideas?

Parochially. Culture has more impact.

Dennis Hackethal’s avatar

But an AGI might not develop such phrases independently. (See #3730.)

Criticism of #3629 · Criticized 1
Dennis Hackethal’s avatar

Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.

Criticism of #3732
Knut Sondre Sæbø’s avatar
3rd of 3 versions

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

Criticized 2
Knut Sondre Sæbø’s avatar

If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.

Dennis Hackethal’s avatar

So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.

Criticism of #3755
Dennis Hackethal’s avatar

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.

Yeah, maybe, but again (#3693), those are parochial factors, starting points. Ideas are more important. An AGI could just switch bodies rapidly anyway.

Criticism of #3755
Dennis Hackethal’s avatar

This is also borrowed from cognitive science.

Yeah, the cog-sci guys don’t understand Popper or epistemology generally. They seem to view minds and brains as input/output machines. But that isn’t how that works.

Criticism of #3623
Knut Sondre Sæbø’s avatar

I think that's pretty accurate. But if you believe reality simply works by executing a formal set of fundamental rules, how can you believe anything else? By this model, any system only ever has input, output, and functions that determine how that output is generated. What else is there?

Criticism of #3630 · Criticized 2
Knut Sondre Sæbø’s avatar

If strong emergence exists, other things that have downward causation can "emerge".

Criticism of #3653
Knut Sondre Sæbø’s avatar

What is the evidence for strong emergence, as opposed to just viewing every phenomenon as the processing of fundamental laws?

Criticism of #3664 · Criticized 1
Knut Sondre Sæbø’s avatar
2nd of 2 versions

Even a non-living system can build up constraints at the aggregate level which have downward causation. After a crystal is formed, the lattice constrains which vibrational modes are possible for individual atoms. In other words, being part of a larger structure (which follows other rules) has downward causation on "parts" following fundamental rules. There might be other emergent structures that expose other fundamental rules not encompassed by the known fundamental rules.

Criticism of #3665
Dennis Hackethal’s avatar

[A]ny system only ever has input, output, and functions that determine how that output is generated. What else is there?

Minds don’t necessarily output anything. Also, they don’t just run existing functions, they create new ones.

Criticism of #3653
Dennis Hackethal’s avatar

I don’t think any of this addresses my original criticism that calling people “embodied agent[s]” is dehumanizing. It sounds like we’re studying rats. So what if cog-sci is dehumanizing? That doesn’t make it better.

Criticism of #3623
Knut Sondre Sæbø’s avatar

Haven't thought about it like that. The purpose of speaking of an embodied agent is to generalize cognition. To understand what's relevant to an agent, you need to understand how that agent is embodied in the world.

Criticized 1
Dennis Hackethal’s avatar

Again, to me, that’s how programmers think about their video-game characters, and how researchers think about lab rats in mazes. I would avoid talking about people as ‘agents’ and instead treat them as human beings.

To understand what’s relevant to a person, you need to understand their problem situation.

Criticism of #3644
Knut Sondre Sæbø’s avatar
2nd of 2 versions

I think I agree. But to formulate a general theory for agents, the term ‘people’ is too strong when speaking of what’s relevant for a bacterium (which also has problems that shape its actions, what it finds relevant, etc.). But I agree that persons and agents should be differentiated, since people exceed the pre-given problems set by evolution.

Criticized 2
Dennis Hackethal’s avatar

…a bacterium … also has problems that shape its actions, what it finds relevant, etc…

A bacterium has ‘problems’ in some sense but it cannot create new knowledge to solve them. It may be more accurate to say that its genes have problems.

Criticism of #3689
Dennis Hackethal’s avatar

But to formulate a general theory for agents, the term ‘people’ is too strong when speaking of what’s relevant for a bacterium…

Yes. This tells you that people aren’t just agents. They are agents in the sense that they exist in some environment they can interact with and move around in. But they’re so much more than that.

It’s a bit like saying humans are mammals. They are, but that’s not their distinguishing characteristic, so we can’t study mammals to learn about people.

I wouldn’t bother with cog sci or any ‘agentic’ notion of people. Focus on Popperian epistemology instead. It’s the only promising route we have.

Criticism of #3689
Dennis Hackethal’s avatar

The purpose of speaking of an embodied agent is to generalize cognition.

It’s possible that the actual purpose of such language is more sinister than that, having to do with static memes: to continue the age-old mystical tradition of portraying man as a pathetic, helpless being at the mercy of a universe he cannot understand or control.

But I’m purely speculating here and would have to think more about it. So I’m not marking this as a criticism (yet).

Knut Sondre Sæbø’s avatar
3rd of 3 versions

I don’t think so, but I don’t know enough about the history. But the framework emerged out of biology trying to make a theory of organisms in general (innate theories like autopoiesis/self-preservation, for example). Then it’s been used specifically in cognitive science to try to integrate the general framework with human cognition. Even though it is dehumanizing, there is some value in viewing at least parts of human cognition in these terms. Whatever creativity is, most of human experience is already pre-given moment to moment, not willed by the person. I don’t think we as people derive our sense of autonomy from this world construction and pre-given coupling (we receive automatic responses/affordances). The only real change I seem to have is in every conscious moment.

Criticism of #3659 · Criticized 3
Dennis Hackethal’s avatar

Whatever creativity is, most of human experience is already pre-given moment to moment, not willed by the person.

I think what really happens is this: when we’re young, we guess theories about how to experience the world, and then we correct errors in those theories and practice them to the point they become completely automated. Much of this happens in childhood. As adults, we don’t remember doing it. So then experience seems ‘given’.

Criticism of #3686
Dennis Hackethal’s avatar

The only real change I seem to have is in every conscious moment.

I don’t know what it means to ‘have change’, but note that even unconscious ideas evolve in our minds all the time. So those change as well, if that’s what you mean.

Criticism of #3686
Dennis Hackethal’s avatar

[T]he framework emerged out of biology trying to make a theory of organisms in general…

That doesn’t mean static memes couldn’t have co-opted the framework to undermine man and his mind.

Criticism of #3686
Knut Sondre Sæbø’s avatar

Superseded by #3654. This comment was generated automatically.

Criticism of #3623