… for absolute truth, the boundaries of meaning of your terms must be completely determined.
You seem to be using ‘absolute truth’ differently than others. Wikipedia:
Absolute truth is a statement that is true at all times and in all places. It is something that is always true no matter what the circumstances. It is a fact that cannot be changed. For example, there are no round squares.
This is what I think Popper had in mind. Also that absolute truth leaves no room for deviation (which I think is the reason it’s “true at all times and in all places”). Nothing related to definitions or meanings. Popper wasn’t very interested in definitions.
Hi Rob, welcome to Veritula. It’s nice to meet another software engineer. Be sure to read ‘How Does Veritula Work?’ and ‘How Do Bounties Work?’ to make the most of V.
Re: definitions, you raise an argument others have made before, namely that language has some unavoidable ambiguity or incomplete information, which necessarily introduces error. I already addressed that argument in the article linked in the discussion header:
I don’t know if I agree that natural language is always ambiguous, but even if so, I don’t see how that implies error. We can make ambiguous but true statements. ‘I’m currently located in a hemisphere’ is ambiguous as to which hemisphere, but it’s still true. We could be silly and ask, on which planet? This one. Earth. We all know what we’re talking about.
Therefore, I disagree that we need perfect definitions or infinite precision to find absolutely true ideas. (But correct me if I’m wrong to think you’re making the same argument.)
I suggest you read the article in full, otherwise you may inadvertently make more arguments that have been addressed: https://libertythroughreason.com/fallibilism-vs-cynicism/
There’s also https://blog.dennishackethal.com/posts/don-t-take-fallibilism-too-far.
I think you run into the problem of definitions. An idea cannot be absolute, perfect truth without total, perfect, complete definitions for its terms. This isn't required for knowledge - the terms can be rough because the ideas are tentative. But for absolute truth, the boundaries of meaning of your terms must be completely determined. But, as the postmoderns pointed out, this requires infinite information - the complete determination of any one term requires its distinction from all other terms. In fact, they didn't go far enough. I'd argue you would need to know the distinction between the term and all other possible terms.
You have to know perfect definitions in order to have the idea in your head be perfectly true. Perfect definitions require infinite information, therefore you cannot know perfect truth.
Our ideas can be 100% true in the sense of absolute truth. It’s possible to come up with true ideas. There’s no criterion of truth to tell that they’re true, but they can still be true.
Whether the above idea (#4751) is refuted or not, there are no viable alternative solutions to the "PROBLEM" raised in #4752.
(Criticize this with alternative solutions).
The above idea (#4751) is the only solution to the "PROBLEM" raised in #4752.
(Criticize this with alternative solutions).
The above idea (#4751) is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).
Assumption A1: Only programs that are people can, while running, constitute qualia/experience/subjectivity/consciousness.
This is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).
To clarify and add on to #4805: No, we couldn't program an LLM (on its own) to do random variation in the sense constituting evolution, because all of the randomly chosen changes to its outputs are still implicit from its current knowledge (training data + design from programmers). There are also no means of criticism that are not likewise implicit: any niche or criterion it generates, then seeks to satisfy, was derived again from its existing knowledge. It is a closed system (whether or not we have run it such as to reveal everything it implies!).
#4806 is saying: if variations of knowledge are agnostic to that knowledge's meaning, then they are not implicit from it; otherwise "implicit" doesn't mean anything. So #4806 is really only asking whether what matters is the source of knowledge, and that isn't really a criticism of #4805.
Criticism #4875 applies to #4806, as shown.
Yes, not everything is implied by everything else, so I think what we must mean by "implicit" is: can be deduced from, or assembled using, available transformations.
For knowledge to be truly novel in the sense of having come from creativity, it must not be deducible. Ambient, unjustified substrate is "taken from the environment" and filtered by selection. What survives can be increasingly truth-containing.
Mutations to a substrate, meaning blind mutations, not specific or designed, must not be implicit from the substrate; the result of their application cannot be deduced in any way... Otherwise the knowledge they might contain would already have been present...
Knut, you’ve won the bounty. You need to integrate with Stripe to get paid.
I agree this feature should be optional and toggleable but that doesn’t address its (potential) shortcomings. It just kinda hides them.
I’m saying it’s not clear to me how deeply nested comments would be shown.
If I’m understanding you correctly, you dislike having to scroll up and down in a discussion. You see empty space on the right and you think it should be filled. Hence your suggestion to put top-level ideas next to each other rather than on top of each other.
But then where do comments on each top-level idea go? Do they still go underneath? Nesting needs indentation. So that means deep nesting gets lots of indentation. So there’ll still be plenty of empty space.
Those are the kinds of things we’d need to figure out to have a mature design proposal ready for implementation.
Thanks! Creativity is one of the most interesting ideas in DD's philosophy. If you come across any articles or resources on it that you've found helpful, I'd love for you to send them over.
I'm actually in the channel, just haven't been very active.
Upon review, we should maybe say instead that personhood should not be defined solely in terms of tractability, something the bounty terms are not clear about. As it stands (bounty aside), I find myself still seeing tractability as an important aspect of epistemology and the mystery of personhood/knowledge creation, a hunch reinforced as I continue reading through "Why Philosophers Should Care About Computational Complexity" by Scott Aaronson: https://www.scottaaronson.com/papers/philos.pdf
This may be too subjective, but I've always really disliked end-of-line hyphenation, of the kind currently used here. I find it pretty disruptive to the flow of reading, AND a source of visual clutter. That's a heavy cost for the supposed benefit of a justified margin, but we don't seem to be getting that benefit here either; the margin still appears jagged. A justified margin itself is unnecessary, if you ask me, but it can in any case be accomplished the other way, where small spaces are distributed between words in each line as needed. To me the latter method of the two is better for readability, no contest. I would advocate for the third/default method, here (jagged margin, no funny business), since a justified margin seems needlessly formal.
I agree that tractability is related to a given problem space, and that creativity is about reshaping the problem space, among other things. Given that I've been thinking of the problem space as the space of all explanations, I'm not sure where I stand... Maybe the "space of all explanations" framing is wrongheaded, because a mind never has any actionable knowledge of that space? We can discuss the space of all explanations in some sense, but we can't organize or describe it in any substantive way...
Also, per #4865, you helped me remember that personhood could involve intractable algorithms, but ones which only ever run with small inputs, since that can still be perfectly practical. Whether that means the whole person is a tractable algorithm or not, I'm not sure.
Between these points I think this is enough for you to claim the bounty, because it does argue that personhood "should not be defined in terms of tractability", per the bounty terms (italics mine, here). Tractability does not help explain personhood. Or, in any case, it doesn't seem like this line of discussion will be very fruitful (but this could itself be mistaken).
"Secondarily" meaning:
Implementations of an algorithm inherit the algorithm’s asymptotic behavior. If an implementation has a different asymptotic behavior from the algorithm it was meant to implement, it is effectively an implementation of a different algorithm.
Yes, my understanding is that the standard sense of tractable, for some algorithm, is: can be executed in time that grows at worst as a polynomial function of the input size. This is the sense I mean. The fixed task would be: create a given explanation in the space of all possible explanations.
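To make that sense of "tractable" concrete, here's a minimal sketch (my own toy example, not from the discussion): counting the steps of a polynomial-time algorithm versus an exponential-time one on the same input size, to show how differently their costs grow.

```python
def quadratic_steps(n):
    """A tractable algorithm: work grows as n**2, polynomial in input size."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

def exponential_steps(n):
    """An intractable algorithm: work grows as 2**n, e.g. brute force
    over all subsets of n items."""
    steps = 0
    for mask in range(2 ** n):
        steps += 1
    return steps

# Doubling the input multiplies the polynomial cost by a constant factor (4x),
# but it *squares* the exponential cost.
assert quadratic_steps(20) == 4 * quadratic_steps(10)
assert exponential_steps(20) == exponential_steps(10) ** 2
```

Polynomial growth is what makes an algorithm practical for large inputs; exponential growth makes even modest inputs infeasible, which is the usual dividing line behind "tractable."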
Implementations of a given algorithm can be way more or less efficient in practice, though. Maybe personhood does require intractable algorithms, but ones which only ever run with small inputs... The question of the bounty is whether we can make a case for or against this. But part of the hope is also to learn if this whole framing is mistaken.
I think I see now, and agree with the above. Partly a semantics issue (yes, I'm thinking of an algorithm in the "formal" CS sense: an abstract/mathematical finite procedure). The scare quotes were meant to suggest that one could attempt to implement one algorithm while the implementation in fact more closely implements some other, unrelated algorithm; but this is confusing.
At any rate, how ChatGPT summarized it makes sense to me:
"One function → many algorithms can compute it.
One algorithm → many implementations can realize it.
Complexity attaches primarily to algorithms, secondarily to implementations, and not to functions."