Search Ideas
"Understanding" isn't just another way of saying "can explain." Explaining follows from understanding, but the two aren't synonymous. An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory-universality definition is an intelligent selection mechanism? A random program can generate any explanation given infinite time, but it will never select which explanation is good.
The given algorithm has a complexity, independent of [the implementation]
No, the complexity depends on the implementation.
Maybe I’m misunderstanding you, but that’s how standing bounties work already.
When you fund a standing bounty, you set the number of criticisms you’re willing to pay for, and the amount for each.
If that’s something you want to do for your current bounty, you still can, before current funding runs out.
See also “How Do Bounties Work?”
The "Battle tested" badge should have a hyphen!
https://www.merriam-webster.com/dictionary/battle-tested
Bounties could pay out multiplicatively, up to a limit (e.g. $10 per criticism, up to 3). This would preserve the incentive for bounty hunting after one criticism has already been posted.
Related to this or not, it could be useful to be able to set a bounty on a set of ideas, rather than just one. "Criticize any of these ideas for $n."
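The capped multiplicative payout rule suggested above can be sketched in a few lines. This is only an illustration of the proposal; the function name, signature, and dollar amounts are hypothetical, not Veritula's actual implementation:

```python
def bounty_payout(per_criticism: float, cap: int, num_criticisms: int) -> float:
    """Pay a fixed amount per accepted criticism, up to a cap.

    Hypothetical sketch of the suggested rule: e.g. $10 per
    criticism, for at most 3 criticisms.
    """
    return per_criticism * min(num_criticisms, cap)

# With per_criticism=10 and cap=3, a second or third criticism
# still earns a payout, so the bounty-hunting incentive survives
# the first posted criticism.
```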
The main thread is ambiguous currently, by that reasoning: it's always gray. Having the whole thing red to indicate one or more pending criticisms below seems useful, and cool. And the offshoots from the main thread (the little curly part leading to each sub-idea) can have the new colors.
E.g.: User scrolls down the main bright red thread, past gray comment offshoots and dim red refuted criticism offshoots, until reaching the bright red pending criticisms offshoot that is the cause of the main thread being bright red. (!)
Currently, a single gray "thread" comes off an idea, and splits off into sub-ideas. A single criticism in the above scheme would turn the whole thread red, which is ambiguous.
And dimmer red for refuted criticisms, brighter red for pending ones! Default gray for comments.
The link could be put in a new tooltip, or something. Or kept as is, just without the yellow bubble, frankly.
It should be made not red. Gray. Arguable even without the red criticism line idea above, since it already conflicts with the "red = criticism" motif.
The quote
indentation bar
is red, which would cause visual confusion.
Is it handy? I have yet to want to open the criticized idea in a new tab. I have only ever wanted to scroll up to see it, which is slightly irksome with the current yellow bubble hashtag-matching method. And when the criticized idea is clearly immediately above, the yellow bubbles serve no real purpose, only add visual clutter.
The yellow bubbles link to the ideas they are criticizing, which can be handy.
Reiterating/refining #3904: I think the yellow "Criticism of" bubbles can and should be replaced by a graphical indication that is much easier on the eyes. The dropdown line can be made red if the comment it leads to is a criticism, and the bubble on the criticism can be eliminated. Reading the yellow bubble to get the idea # it is referring to, then searching the ideas above for the matching # is inelegant (even if it is usually the one right above).
Could be optional, as I said. Rearrange top-level ideas as toggled. Maybe not worth the trouble. Just spitballing. See #4825.
Not understanding this criticism. Maybe my idea is unclear. I'm picturing the existing "column" of a discussion, repeated column-wise for each top-level idea. Current discussion content takes up only the left ~third of my screen, while the right two thirds of my screen is totally unused. The cost of using that real estate is more content (clutter) on screen, the benefit is less time scrolling up and down in one dimension, looking for given ideas and getting bearings, which I sometimes find tiring. A second dimension helps get bearings (e.g. "Oh yeah, this relates to that one over here near the middle of the third column." Rather than: "That one was ... 77% of the way down the page, hmm, what were some words from it that I can use to ctrl+f, grrrrr.").
Thoughts on an optional "implies" relation for ideas? I find myself commenting on one idea something which it implies, then criticizing that, but the original idea is not marked criticized. Being able to chain or bundle ideas avoids the bookkeeping issue of having to make new criticisms for each step in the chain, if one is criticized.
Agreed on both counts, but I think the bountied idea survives this...
Recognizing and criticizing ideas could be a requisite for tractably synthesizing any possible explanation (I suspect as much).
Ah, so if I understand correctly, there are two knobs affecting speed (elapsed time) for a given algorithm: the hardware, and the implementation of the algorithm. The given algorithm has a complexity, independent of those two, which is how the time and memory scales with an input.
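The distinction above can be made concrete with a small, hypothetical example (not from the discussion): two different implementations of linear search have different constant factors, and run at different speeds on different hardware, but both scale linearly with input size, i.e. both are O(n):

```python
def linear_search_loop(items, target):
    # Explicit loop: O(n) time, O(1) extra memory.
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def linear_search_builtin(items, target):
    # Built-in list.index: better constant factors (implemented in C),
    # but the same O(n) growth rate.
    try:
        return items.index(target)
    except ValueError:
        return -1

# Hardware and implementation are the two "knobs" that change elapsed
# time; the complexity — how time and memory scale with input size —
# is the same for both functions.
```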
Assumption A1: Only programs that are people, while running, can constitute qualia/experience/subjectivity/consciousness.
Universal explainers
In the context of how AGI may work – which seems to be what Tyler is mostly interested in – the concept of a universal explainer might not get us very far. Creativity is the more fundamental concept, I think.
A person is a universal explainer, yes, but he could also use his creativity to come up with reasons not to create explanations.
https://blog.dennishackethal.com/posts/explain-irrational-minds
Hi Mike, welcome to Veritula. I’m Dennis, the founder.
Take a look at the discussions for any topics that might interest you.
You can also participate in bounties.
What brings you to V?