“Can you live your life 100% guided by reason?”
Living according to reason and rationality alone is impossible, because propositional knowledge is only one of the kinds of knowledge an embodied agent needs (the others being procedural, participatory, and perspectival knowledge).
Calling people “embodied agent[s]” like they’re barely superior to video-game characters is dehumanizing and weird.
This is also borrowed from cognitive science. But what I meant was to point out that there are “pre-conceptual” models, desires, attentional salience, etc., that impinge on and filter input to conscious cognition. An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to “move around” in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we’re close to the truth, we understand, we grasp a concept, we arrive at a conclusion.
An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to "move around" in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we're close to the truth, we understand, we grasp a concept, we arrive at a conclusion.
That has nothing to do with brain regions. An AGI running on a laptop would use the same phrases.
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
If this is the case, it would make sense to make an AGI as similar to ourselves as possible, so that it can use our pre-existing knowledge more directly.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.