Knut Sondre Sæbø

@knut-sondre-saebo​·​Joined Sep 2024​·​Ideas
 User
Registered their account.
 Novice
Posted their first idea.
 Copy editor
Created their first revision.
 Critic
 Defender
 Beginner
Posted their 10th idea.
 Engager
Participates in three or more discussions.
 Assistant editor
Created their 10th revision.
 Private
 Shield
 Intermediate
Posted their 50th idea.
  Knut Sondre Sæbø commented on idea #4863.

Nice work on #4856. Sounds like you’re one of the few who get DD’s stance re creativity.

I don’t think you’re in the Veritula Telegram channel yet. Email me if you want to be: dh@dennishackethal.com

#4863​·​Dennis Hackethal, 8 days ago

Thanks! Creativity is one of the most interesting ideas in DD's philosophy. If you come across any articles or resources on it that you've found helpful, I'd love for you to send them over.

I'm actually in the channel, just haven't been very active.

  Knut Sondre Sæbø revised criticism #4858.

I think tractability lacks the open-ended capacity to reformulate what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not to iron out implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.

I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

I think tractability lacks the open-ended capacity to reformulate what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not to iron out implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a pre-given frame.

I might be confused about what you mean by tractable. But it seems to me that tractability can't do the work the bounty asks. Tractability is formally defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

  Knut Sondre Sæbø revised criticism #4856.

I think the core of universal creativity isn't about efficiency; it's the open-ended capacity to restructure what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not to iron out implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.

I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

I think tractability lacks the open-ended capacity to reformulate what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not to iron out implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.

I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

  Knut Sondre Sæbø commented on criticism #4856.

I think the core of universal creativity isn't about efficiency; it's the open-ended capacity to restructure what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not to iron out implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.

I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

#4856​·​Knut Sondre Sæbø, 9 days ago

An interesting example from cognitive science is the Mutilated Chessboard Problem, which asks whether a board with two same-coloured corners removed can be tiled by dominoes. As a tiling problem the search space is combinatorially explosive. But reframe it as a colour problem and the answer is easy. Every domino covers one black and one white square, and you have unequal numbers of each. The solution came not from searching harder, but from seeing the problem differently.
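The colour reframing can be sketched in a few lines of Python (a hypothetical illustration of the parity argument, not part of the original discussion; the function name `colour_counts` is my own):

```python
def colour_counts(removed):
    """Count (black, white) squares on an 8x8 board minus the `removed` cells.

    A square (row, col) is 'black' when (row + col) is even. Every domino
    covers one black and one white square, so a perfect tiling can exist
    only if the two counts are equal.
    """
    black = white = 0
    for row in range(8):
        for col in range(8):
            if (row, col) in removed:
                continue
            if (row + col) % 2 == 0:
                black += 1
            else:
                white += 1
    return black, white

# Remove two same-coloured opposite corners, (0, 0) and (7, 7): both have
# even row+col, so both are 'black'.
black, white = colour_counts({(0, 0), (7, 7)})
print(black, white)    # 30 32 -- unequal, so no domino tiling exists
```

No search over tilings is needed; the colour count settles the question immediately, which is the point of the reframe.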

  Knut Sondre Sæbø addressed criticism #4694.

By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be a standard for personhood other than: able to generate any possible explanation. Such as: can do that tractably.

#4694​·​Tyler Mills (OP) revised 28 days ago

I think the core of universal creativity isn't about efficiency; it's the open-ended capacity to restructure what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not to iron out implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.

I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

  Knut Sondre Sæbø revised criticism #4853 and unmarked it as a criticism.

This also suggests that the distinction between AI and AGI (and "universal creativity") turns on whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing. Bounded creativity must start with something.

I think DD's view is that creativity is problem-solving at a meta level. True knowledge creation occurs when the problem space itself is reformulated, not when implications of existing theories are merely ironed out. An AI is already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.

This is why tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

This also suggests that the distinction between AI and AGI (and "universal creativity") turns on whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing. Bounded creativity must start with something.

Moved the criticism of #4694.

  Knut Sondre Sæbø criticized idea #4688.

This also suggests that the distinction between AI and AGI (and "universal creativity") turns on whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing. Bounded creativity must start with something.

#4688​·​Tyler Mills (OP), 28 days ago

This also suggests that the distinction between AI and AGI (and "universal creativity") turns on whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing. Bounded creativity must start with something.

I think DD's view is that creativity is problem-solving at a meta level. True knowledge creation occurs when the problem space itself is reformulated, not when implications of existing theories are merely ironed out. An AI is already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.

This is why tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

  Knut Sondre Sæbø revised criticism #4850 and unmarked it as a criticism.

By tractable, do you mean "efficient relative to a fixed task"?

  Knut Sondre Sæbø criticized idea #4847.

This is a good point, related to Dirk's #4813. As far as the bounty goes, though, I think my response in #4823 applies here as well. To refine it:
Recognizing, criticizing, and being able to understand explanations could all be prerequisites for tractably synthesizing any possible explanation. The bounty concerns whether the tractability requirement can be done without.

It seems that a mind's being able to create, recognize, understand, and differentiate (etc.) good explanations is necessary but not sufficient for personhood; if that process is intractable, then beyond a certain amount of current knowledge (considering that as the input to the process), the person effectively cannot continue with it... so that compromises the universality.

They must be able to create, recognize and understand any given explanation, and maintain that ability as their knowledge grows, ad infinitum...

#4847​·​Tyler Mills (OP), 10 days ago

By tractable, do you mean "efficient relative to a fixed task"?

  Knut Sondre Sæbø commented on criticism #4260.

A concept or idea with no experiential grounding is meaningless.

Maybe, but that’s different from confusing a parochial factor for a fundamental one.

#4260​·​Dennis Hackethal (OP), 2 months ago

Could you elaborate? Is the point that physical experience, metaphors, and other things that ground ideas don't constrain the reach of ideas at all, or only partially?

  Knut Sondre Sæbø revised criticism #4842.

"Understanding" isn't just another way of saying "can explain." An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory universality definition is an intelligent selection mechanism? A random program can generate any explanation given infinite time, but it will never select which explanation is good.

"Understanding" isn't just another way of saying "can explain." Explaining follows from understanding, but isn't synonymous with it. An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory universality definition is an intelligent selection mechanism? A random program can generate any explanation given infinite time, but it will never select which explanation is good.

  Knut Sondre Sæbø addressed criticism #4808.

Maybe... but "understanding" is too vague, I think. Doesn't understanding mean: can explain? But then this is just "can create any explanation" again. I think the core question is why a random program generator isn't a person, coming from Deutsch's definition of a person as a program that has explanatory universality -- can create any explanation (my thought here is that this definition isn't good enough on its own, given the random generator point).

#4808​·​Tyler Mills (OP), 14 days ago

"Understanding" isn't just another way of saying "can explain." An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory universality definition is an intelligent selection mechanism? A random program can generate any explanation given infinite time, but it will never select which explanation is good.
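The generation-without-selection point can be made concrete with a toy "infinite monkey" sketch (a hypothetical Python illustration, not from the discussion; the target string and names are my own):

```python
import random
import string

def random_conjecture(length, rng):
    """Emit a uniformly random string -- pure generation, no selection."""
    alphabet = string.ascii_lowercase + " "
    return "".join(rng.choice(alphabet) for _ in range(length))

rng = random.Random(0)  # deterministic seed for reproducibility
target = "e equals m c squared"

# The generator can in principle emit `target`, but nothing inside it
# favours that output: every string of the same length has identical
# probability 1/27**20. Picking out the good one requires an external
# criterion (here, literal comparison with a target the generator
# itself does not possess).
candidates = [random_conjecture(len(target), rng) for _ in range(5)]
print(any(c == target for c in candidates))  # False
```

The selection step, comparing against `target`, lives entirely outside `random_conjecture`; that asymmetry is the sketch's whole point.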

  Knut Sondre Sæbø revised criticism #4782.

"Does not understand explanatory knowledge" seems like a better criterion.

Understanding explanatory knowledge seems like a better criterion.

  Knut Sondre Sæbø addressed criticism #4781.

A random number generator does not create explanatory knowledge.

#4781​·​Dirk Meulenbelt, 17 days ago

"Does not understand explanatory knowledge" seems like a better criterion.

  Knut Sondre Sæbø revised criticism #4256.

Those are still spatial metaphors. I'm not saying we can't extend our ideas through imagination, creativity, etc. Only that the metaphors and concepts we use, and that have meaning for us, are constrained by the perspectives we can take as humans. When we try to explain how bats perceive through echolocation, we fall back on visual simulations, because sight is the only perceptual world we know.

Those are still spatial metaphors. I'm not saying we can't extend our ideas through imagination, creativity, etc. Only that the metaphors and concepts we use, and that have meaning for us, are constrained by the perspectives we can take as humans. When we try to explain how bats perceive through echolocation, we fall back on visual simulations, because sight is the only perceptual world we know. Ideas have a similar limitation.

  Knut Sondre Sæbø revised idea #4255 and marked it as a criticism.

Those are just spatial metaphors though. I'm not saying we can't extend our ideas through imagination, creativity, etc. Only that the metaphors and concepts we use, and that have meaning for us, are constrained by the perspectives we can take as humans. Can you think of any idea that isn't rooted in an experiential perspective?

Those are still spatial metaphors. I'm not saying we can't extend our ideas through imagination, creativity, etc. Only that the metaphors and concepts we use, and that have meaning for us, are constrained by the perspectives we can take as humans. When we try to explain how bats perceive through echolocation, we fall back on visual simulations, because sight is the only perceptual world we know.

  Knut Sondre Sæbø commented on criticism #3769.

Humans use flight-related words even though we can’t fly. From ChatGPT:

  • Elevated (thinking, mood, language)
  • High-level (ideas, overview)
  • Soar (ambitions, prices, imagination)
  • Take off (projects, careers)
  • Grounded (arguments, people)
  • Up in the air (uncertain)
  • Overview (“over-see” from above)
  • Perspective (originally spatial vantage point)
  • Lofty (ideals, goals)
  • Aboveboard (open, visible)
  • Rise / fall (status, power, ideas)
  • Sky-high (expectations, costs)
  • Aerial view (conceptual overview)
  • Head in the clouds (impractical thinking)
#3769​·​Dennis Hackethal (OP), 3 months ago

Those are just spatial metaphors though. I'm not saying we can't extend our ideas through imagination, creativity, etc. Only that the metaphors and concepts we use, and that have meaning for us, are constrained by the perspectives we can take as humans. Can you think of any idea that isn't rooted in an experiential perspective?

  Knut Sondre Sæbø addressed criticism #3768.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.

Yeah maybe but again (#3693), those are parochial factors, starting points. Ideas are more important. AGI could just switch bodies rapidly anyway.

#3768​·​Dennis Hackethal (OP), 3 months ago

We explain the world by postulating invisible things, but we can only understand those abstractions through concrete metaphors rooted in our physical experience. A concept or idea with no experiential grounding is meaningless.

  Knut Sondre Sæbø commented on idea #3705.

Isn't every theory infinitely underspecified?

This stance is presumably a version of the epistemological cynicism I identify here.

#3705​·​Dennis Hackethal (OP), 3 months ago

Maybe scepticism is fallibilism taken too far?

  Knut Sondre Sæbø revised criticism #3752 and unmarked it as a criticism.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

  Knut Sondre Sæbø commented on criticism #3752.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3752​·​Knut Sondre Sæbø revised 3 months ago

If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.

  Knut Sondre Sæbø revised criticism #3751.

I think that depends on the "embodiment" of the AGI. That is, what it is like to be that AGI, and what its normal world looks like. A bat (if they were people) would probably prefer different metaphors than a human. Humans are very visual, which makes spatial features very salient for us. Metaphors are useful because they take advantage of already-salient aspects for a person to view other things. So things that are immediately salient for the person have more potency as a metaphor.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

  Knut Sondre Sæbø addressed criticism #3733.

Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.

#3733​·​Dennis Hackethal (OP), 3 months ago

I think that depends on the "embodiment" of the AGI. That is, what it is like to be that AGI, and what its normal world looks like. A bat (if they were people) would probably prefer different metaphors than a human. Humans are very visual, which makes spatial features very salient for us. Metaphors are useful because they take advantage of already-salient aspects for a person to view other things. So things that are immediately salient for the person have more potency as a metaphor.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

  Knut Sondre Sæbø revised idea #3741. The revision addresses idea #3734.

One part of my question was whether a formal criterion can be applied universally. If the criterion itself must be chosen, like for instance what brings more fun, meaning, or practical utility, then by what criterion do we choose the criterion? Or is the answer simply to apply the same process of critical examination to everything that arises, until a coherent path emerges?

The other part was how you actually criticize an implicit or unconscious idea. If you have an unconscious idea that gives rise to a conflicting feeling, for instance, how do you criticize a feeling?

  Knut Sondre Sæbø commented on criticism #3734.

mye

How does this happen? (Not a metaphorical question.)

#3734​·​Dennis Hackethal (OP), 3 months ago

That was autocorrect from my cellphone. "Mye" means "a lot" in Norwegian. Not a good idea to have autocorrect on when you're writing in two languages.