Knut Sondre Sæbø
@knut-sondre-saebo·Joined Sep 2024·Ideas
#4863 · Dennis Hackethal, 8 days ago
Nice work on #4856. Sounds like you’re one of the few who get DD’s stance re creativity.
I don’t think you’re in the Veritula Telegram channel yet. Email me if you want to be: dh@dennishackethal.com
Thanks! Creativity is one of the most interesting ideas in DD's philosophy. If you come across any articles or resources on it that you've found helpful, I'd love for you to send them over.
I'm actually in the channel, just haven't been very active.
#4856 · Knut Sondre Sæbø, 9 days ago
I think the core of universal creativity isn't about efficiency; it's the open-ended capacity to restructure what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not to iron out implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.
I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.
An interesting example from cognitive science is the Mutilated Chessboard Problem, which asks whether a board with two same-coloured corners removed can be tiled by dominoes. As a tiling problem the search space is combinatorially explosive. But reframe it as a colour problem and the answer is easy. Every domino covers one black and one white square, and you have unequal numbers of each. The solution came not from searching harder, but from seeing the problem differently.
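The two framings can be contrasted in a few lines of code (an illustrative sketch, not from the thread; the function name is my own). The colour argument checks only a necessary condition for tileability, which is all the impossibility claim needs, whereas a naive tiling search would have to enumerate an explosive number of domino placements:

```python
def colour_counts_balance(removed):
    """Parity reframing of the Mutilated Chessboard Problem.

    Every domino covers exactly one black and one white square, so a
    tiling can exist only if the remaining squares split evenly by
    colour. We check just that necessary condition -- no search over
    domino placements at all.
    """
    squares = {(r, c) for r in range(8) for c in range(8)} - set(removed)
    black = sum((r + c) % 2 == 0 for r, c in squares)
    white = len(squares) - black
    return black == white

# Remove two same-coloured corners: 30 black vs 32 white remain,
# so no tiling is possible.
print(colour_counts_balance([(0, 0), (7, 7)]))  # False

# Remove two opposite-coloured corners: counts stay balanced at 31/31,
# so the colour argument raises no objection.
print(colour_counts_balance([(0, 0), (0, 7)]))  # True
```

The point of the example is that the hard work happens in choosing the representation (colours instead of tilings); once the problem is reframed, the remaining computation is trivial.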
#4694 · Tyler Mills (OP), revised 28 days ago
By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be a standard for personhood other than: able to generate any possible explanation. Such as: can do that tractably.
#4688 · Tyler Mills (OP), 28 days ago
This also admits of the distinction between AI and AGI (and "universal creativity") as being whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing. Bounded creativity must start with something.
I think DD's view is that creativity is problem-solving at a meta level. True knowledge creation occurs when the problem space itself is reformulated, not when the implications of existing theories are ironed out. An AI is already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.
This is why tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.
#4847 · Tyler Mills (OP), 10 days ago
This is a good point, related to Dirk's #4813. As far as the bounty goes, I think my response in #4823 applies here as well, however. To refine it:
Recognizing, criticizing, and being able to understand explanations could all be requisites for tractably synthesizing any possible explanation. The bounty regards whether the tractability requirement can be done without. It seems like a mind being able to create, recognize, understand, and differentiate (etc.) good explanations is necessary but not sufficient for personhood; if that process is intractable, then beyond a certain amount of current knowledge (considering that as the input to the process), the person effectively cannot continue with it... so that compromises the universality.
They must be able to create, recognize and understand any given explanation, and maintain that ability as their knowledge grows, ad infinitum...
By "tractable", do you mean "efficient relative to a fixed task"?
#4260 · Dennis Hackethal (OP), 2 months ago
A concept or idea with no experiential grounding is meaningless.
Maybe, but that’s different from confusing a parochial factor for a fundamental one.
Could you elaborate? Is the point that physical experience, metaphors, and other things that ground ideas don’t constrain the reach of ideas at all, or only partially?
#4808 · Tyler Mills (OP), 14 days ago
Maybe... but "understanding" is too vague, I think. Doesn't understanding mean: can explain? But then this is just "can create any explanation" again. I think the core question is why a random program generator isn't a person, coming from Deutsch's definition of a person as a program that has explanatory universality -- can create any explanation (my thought here is that this definition isn't good enough on its own, given the random generator point).
"Understanding" isn't just another way of saying "can explain." An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory-universality definition is an intelligent selection mechanism? A random program can generate any explanation given infinite time, but it will never select which explanation is good.
#4781 · Dirk Meulenbelt, 17 days ago
A random number generator does not create explanatory knowledge.
"Does not understand explanatory knowledge" seems like a better criterion.
Those are still spatial metaphors. I'm not saying we can't extend our ideas through imagination, creativity etc. Only that the metaphors and concepts we use/have meaning for us, are constrained by the perspectives we can take as humans. When we try to explain how bats perceive through echolocation, we fall back on visual simulations, because sight is the only perceptual world we know.
#3769 · Dennis Hackethal (OP), 3 months ago
Humans use flight-related words even though we can’t fly. From ChatGPT:
- Elevated (thinking, mood, language)
- High-level (ideas, overview)
- Soar (ambitions, prices, imagination)
- Take off (projects, careers)
- Grounded (arguments, people)
- Up in the air (uncertain)
- Overview (“over-see” from above)
- Perspective (originally spatial vantage point)
- Lofty (ideals, goals)
- Aboveboard (open, visible)
- Rise / fall (status, power, ideas)
- Sky-high (expectations, costs)
- Aerial view (conceptual overview)
- Head in the clouds (impractical thinking)
Those are just spatial metaphors though. I'm not saying we can't extend our ideas through imagination, creativity, etc. Only that the metaphors and concepts we use/have meaning for us are constrained by the perspectives we can take as humans. Can you think of any idea that isn't rooted in an experiential perspective?
#3768 · Dennis Hackethal (OP), 3 months ago
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.
Yeah, maybe, but again (#3693), those are parochial factors, starting points. Ideas are more important. An AGI could just switch bodies rapidly anyway.
We explain the world by postulating invisible things, but we can only understand those abstractions through concrete metaphors rooted in our physical experience. A concept or idea with no experiential grounding is meaningless.
#3705 · Dennis Hackethal (OP), 3 months ago
Isn't every theory infinitely underspecified?
This stance is presumably a version of the epistemological cynicism I identify here.
Maybe scepticism is fallibilism taken too far?
#3752 · Knut Sondre Sæbø, revised 3 months ago
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.
#3733 · Dennis Hackethal (OP), 3 months ago
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
One part of my question was whether a formal criterion can be applied universally. If the criterion itself must be chosen, like for instance what brings more fun, meaning, or practical utility, then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?
The other part was how you actually criticize an implicit or unconscious idea. If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?
That was autocorrect from my cellphone. "Mye" means "a lot" in Norwegian. Not a good idea to have autocorrect on when you're writing in two languages...