Search Ideas
I think tractability lacks the open-ended capacity to reformulate what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not just to iron out the implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a pre-given frame.
I might be confused about what you mean by tractable. But it seems to me that tractability can't do the work the bounty asks. Tractability is formally defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.
I think tractability lacks the open-ended capacity to reformulate what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not just to iron out the implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.
I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.
An interesting example from cognitive science is the Mutilated Chessboard Problem, which asks whether a board with two same-coloured corners removed can be tiled by dominoes. As a tiling problem the search space is combinatorially explosive. But reframe it as a colour problem and the answer is easy. Every domino covers one black and one white square, and you have unequal numbers of each. The solution came not from searching harder, but from seeing the problem differently.
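The parity argument behind the reframing can be made concrete. Here's a minimal sketch (the board size and removed corners follow the standard statement of the puzzle): each domino covers one black and one white square, so counting colours settles the question without any tiling search.

```python
# The mutilated chessboard: remove two same-coloured opposite corners
# from an 8x8 board. Every domino covers one black and one white square,
# so a full tiling can exist only if the two colour counts are equal.

def colour_counts(removed):
    """Count the black and white squares remaining after removing cells."""
    counts = {"black": 0, "white": 0}
    for row in range(8):
        for col in range(8):
            if (row, col) in removed:
                continue
            colour = "black" if (row + col) % 2 == 0 else "white"
            counts[colour] += 1
    return counts

# Corners (0, 0) and (7, 7) are the same colour under this colouring.
counts = colour_counts({(0, 0), (7, 7)})
# 30 squares of one colour vs 32 of the other: no tiling exists,
# with no enumeration of domino placements needed.
```

The point of the sketch is that the "colour" framing turns a combinatorial search into a constant-time count.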
I think the core of universal creativity isn't efficiency; it's the open-ended capacity to restructure what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not just to iron out the implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.
I think this shows that tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.
This also fits the distinction between AI and AGI (and "universal creativity") as being whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing. Bounded creativity must start with something.
Moved the criticism of 4694
This also fits the distinction between AI and AGI (and "universal creativity") as being whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing. Bounded creativity must start with something.
I think DD's view is that creativity is problem-solving at a meta level. True knowledge creation occurs when the problem space itself is reformulated, not when the implications of existing theories are ironed out. An AI is already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level: it operates at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a frame; it doesn't generate one.
This is why tractability can't do the work the bounty asks. Tractability is defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.
By "tractable", do you mean "efficient relative to a fixed task"?
By "tractable", do you mean "efficient relative to a fixed task"?
"Complexity" in the sense of growth behavior with input size?
Yes.
I can see how an "implementation" of one algorithm in practice can accidentally change it to another algorithm.
Not sure why you put that in scare quotes. You might be right in the CS sense where ‘algorithm’ refers to an abstract procedure whereas ‘implementation’ is concrete code realizing that algorithm. (Though as a disclaimer, I don’t have a CS degree. My experience with programming is fully on-the-job.)
My point is more that two different implementations that compute the same function can have different big O. In that case, they’re usually considered different algorithms, even if the high-level goal is the same.
Regardless, the structure of the program is by far the most important factor determining performance characteristics. If you were saying that complexity is independent of implementation only insofar as the implementation truly implements the same algorithm, then I agree. So I’m not sure whether I should mark this as a counter-criticism. For now I won’t, pending new evidence.
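The claim that two implementations of the same function can have different big O can be sketched concretely. This is an illustrative example of my own choosing (duplicate detection), not from the thread:

```python
# Two implementations that compute the same function -- "does this list
# contain a duplicate?" -- but with different growth behaviour.

def has_duplicates_quadratic(items):
    """O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n) expected time: track already-seen items in a hash set."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both return the same answer on every input, yet their limiting behaviour differs; on the view above, that makes them different algorithms even though the high-level goal is identical.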
"Complexity" in the sense of growth behavior with input size? Further reading is still suggesting to me that this is intrinsic to a given algorithm (or class of them). Intrinsic to the math and logic. Implementations can be faster/slower/hungrier for a given input, but if they have different limiting behavior, aren't they different algorithms? I can see how an "implementation" of one algorithm in practice can accidentally change it to another algorithm.
This is a good point, related to Dirk's #4813. As far as the bounty goes, I think my response in #4823 applies here as well, however. To refine it:
Recognizing, criticizing, and being able to understand explanations could all be prerequisites for tractably synthesizing any possible explanation. The bounty concerns whether the tractability requirement can be done without.
It seems like a mind's ability to create, recognize, understand and differentiate (etc.) good explanations is necessary but not sufficient for personhood. If that process is intractable, then beyond a certain amount of current knowledge (considering that as the input to the process), the person effectively cannot continue with it... so that compromises the universality.
They must be able to create, recognize and understand any given explanation, and maintain that ability as their knowledge grows, ad infinitum...
Yeah, I'm not sure why I wrote this... I remember the option for number of criticisms now. I guess it slipped my mind.
Could you elaborate? Is the point that physical experience, metaphors, and other things that ground ideas don't constrain the reach of ideas at all, or only partially?
"Understanding" isn't just another way of saying "can explain.". Explaining follows from understanding, but isn't synonymous. An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory universality definition, is an intelligent selection mechanism. A random program can generate any explanation given infinite time, but it will never select which explanation is good.
"Understanding" isn't just another way of saying "can explain." An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory universality definition, is an intelligent selection mechanism. A random program can generate any explanation given infinite time, but it will never select which explanation is good.
The given algorithm has a complexity, independent of [the implementation]
No, the complexity depends on the implementation.
Maybe I’m misunderstanding you, but that’s how standing bounties work already.
When you fund a standing bounty, you set the number of criticisms you’re willing to pay for, and the amount for each.
If that’s something you want to do for your current bounty, you still can, before current funding runs out.
See also “How Do Bounties Work?”
The "Battle tested" badge should have a hyphen!
https://www.merriam-webster.com/dictionary/battle-tested
Bounties could pay out multiple times, up to a limit (e.g. $10 per criticism, up to 3 criticisms). This would preserve the incentive for bounty hunting after one criticism has already been posted.
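The proposed payout rule is simple enough to sketch. This is a hypothetical illustration of the suggestion, not an existing site feature; the function name and defaults are my own:

```python
# Hypothetical capped per-criticism payout rule: a fixed amount per
# accepted criticism, paid out only up to a maximum count.

def bounty_payout(num_criticisms, per_criticism=10, max_paid=3):
    """Total payout: per_criticism dollars each, capped at max_paid."""
    return per_criticism * min(num_criticisms, max_paid)
```

Under this rule a second or third criticism still earns the full amount, which is the incentive-preserving property suggested above.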
Related to this or not, it could be useful to be able to set a bounty on a set of ideas, rather than just one. "Criticize any of these ideas for $n".
The main thread is ambiguous currently, by that reasoning: it's always gray. Having the whole thing red to indicate one or more pending criticisms below seems useful, and cool. And the offshoots from the main thread (the little curly part leading to each sub-idea) can have the new colors.
E.g.: User scrolls down the main bright red thread, past gray comment offshoots and dim red refuted criticism offshoots, until reaching the bright red pending criticisms offshoot that is the cause of the main thread being bright red. (!)
Currently, a single gray "thread" comes off an idea, and splits off into sub-ideas. A single criticism in the above scheme would turn the whole thread red, which is ambiguous.
And dimmer red for refuted criticisms, brighter red for pending ones! Default gray for comments.
The link could be put in a new tooltip, or something. Or kept as is, just without the yellow bubble, frankly.