
Tyler Mills

@tyler-mills · Joined Jan 2026
User: Registered their account.
Initiator: Started their first discussion.
Novice: Posted their first idea.
Critic
Beginner: Posted their 10th idea.
Defender
Private
Copy editor: Created their first revision.
Shield
Engager: Participated in three or more discussions.
Intermediate: Posted their 50th idea.
Assistant editor: Created their 10th revision.
Advanced: Posted their 100th idea.
Lieutenant
Watchman
  Tyler Mills revised idea #4887.

The above idea (#4751) is the only solution to the "PROBLEM" raised in #4752.
(Criticize this with alternative solutions).

Whether the above idea (#4751) is refuted or not, there are no viable alternative solutions to the "PROBLEM" raised in #4752.
(Criticize this with alternative solutions).

  Tyler Mills revised idea #4885.

The above idea (#4751) is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).

The above idea (#4751) is the only solution to the "PROBLEM" raised in #4752.
(Criticize this with alternative solutions).

  Tyler Mills revised idea #4883.

This idea (#4751) is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).

The above idea (#4751) is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).

  Tyler Mills revised idea #4879.

This is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).

This idea (#4751) is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).

  Tyler Mills revised idea #4820.

Assumption A1: Only programs that are people, while running, can constitute qualia/experience/subjectivity/consciousness.

Assumption A1: Only programs that are people can, while running, constitute qualia/experience/subjectivity/consciousness.

  Tyler Mills revised idea #4878.

This is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternatives solutions).

This is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternative solutions).

  Tyler Mills commented on idea #4751.

SOLUTION: The apple programs are not the same programs from one execution to the next. They are being re-evolved every time they are run. This evolution is what the person is doing, and so it must be what gives rise to the experience consisting of the apple rendering.

#4751 · Tyler Mills (OP), about 1 month ago

This is the only solution to the "apple problem" raised in #4752.
(Criticize this with alternatives solutions).

  Tyler Mills addressed criticism #4803.

If only some of the criteria are stored, and the rest are random, is it still evolution? Is evolution only happening if there is random variation? But we could program an LLM to do that as well...

#4803 · Tyler Mills (OP), revised 27 days ago

To clarify and add on to #4805: No, we couldn't program an LLM (on its own) to do random variation in the sense constituting evolution, because all of the randomly chosen changes to its outputs are still implicit from its current knowledge (training data + design from programmers). There are also no means of criticism that are not likewise implicit: any niche or criterion it generates, then seeks to satisfy, is again derived from its existing knowledge. It is a closed system (whether or not we have run it so as to reveal everything it implies!).

  Tyler Mills addressed criticism #4806.

Even if variations are agnostic to any meaning or context of the knowledge, why are they still not implicit? Anything is implicit from anything else, if implicit just means: follows from when a given change is applied... The whole question is where the change is coming from... (?)

#4806 · Tyler Mills (OP), 27 days ago

#4806 is saying: variations of knowledge being agnostic to that knowledge's meaning means they are not implicit from it, else implicit doesn't mean anything. So #4806 is only really asking if what matters is the source of knowledge, and that isn't really a criticism of #4805.
Criticism #4875 applies to #4806, as shown.

  Tyler Mills addressed criticism #4806.

Even if variations are agnostic to any meaning or context of the knowledge, why are they still not implicit? Anything is implicit from anything else, if implicit just means: follows from when a given change is applied... The whole question is where the change is coming from... (?)

#4806 · Tyler Mills (OP), 27 days ago

Right: not everything is implied by everything else, so I think what we must mean by implicit is: can be deduced from, or assembled using, available transformations.

For knowledge to be truly novel in the sense of having come from creativity, it must not be deducible. Ambient, unjustified substrate is "taken from the environment" and filtered by selection. What survives can be increasingly truth-containing.

Mutations to a substrate, meaning blind mutations, not specific or designed, must not be implicit from the substrate; the result of their application cannot be deduced in any way... Otherwise the knowledge they might contain would already have been present...

  Tyler Mills commented on idea #4867.

I agree that tractability is related to a given problem space, and that creativity is about reshaping the problem space, among other things. Given that I've been thinking of the problem space as the space of all explanations, I'm not sure where I stand... Maybe the "space of all explanations" framing is wrongheaded, because a mind never has any actionable knowledge of that space? We can discuss the space of all explanations in some sense, but we can't organize or describe it in any substantive way...

Also, per #4865, you helped me remember that personhood could involve intractable algorithms, but ones which only ever run with small inputs, since that can still be perfectly practical. Whether that means the whole person is a tractable algorithm, I'm not sure.

Between these points I think this is enough for you to claim the bounty, because it does argue that personhood "should not be defined in terms of tractability", per the bounty terms (italics mine, here). Tractability does not help explain personhood. Or, in any case, it doesn't seem like this line of discussion will be very fruitful (but this could itself be mistaken).

#4867 · Tyler Mills (OP), 17 days ago

Upon review, we should maybe say instead that personhood should not be defined solely in terms of tractability, which the bounty terms are not clear about. As it stands (bounty aside), I find myself still seeing tractability as an important aspect of epistemology and the mystery of personhood/knowledge creation, a hunch reinforced as I continue reading through "Why Philosophers Should Care About Computational Complexity" by Scott Aaronson: https://www.scottaaronson.com/papers/philos.pdf

  Tyler Mills posted criticism #4868.

This may be too subjective, but I've always really disliked end-of-line hyphenation, of the kind currently used here. I find it pretty disruptive to the flow of reading, AND a source of visual clutter. That's a heavy cost for the supposed benefit of a justified margin, but we don't seem to be getting that benefit here either; the margin still appears jagged. A justified margin itself is unnecessary, if you ask me, but it can in any case be accomplished the other way, where small spaces are distributed between words in each line as needed. To me the latter method of the two is better for readability, no contest. I would advocate for the third/default method here (jagged margin, no funny business), since justified margins seem needlessly formal.

  Tyler Mills commented on criticism #4860.

I think tractability lacks the open-ended capacity to reformulate what counts as a problem, a solution, and relevant data. Creativity is (at least partially) the ability to reformulate the problem space itself, not the ironing-out of implications of existing theories. AI and computational systems are already good at ironing out the implications in our language and existing knowledge systems. But that's search within a given space, not the creation of a new one. Creativity seems to work on a higher level. It's operating at the level of problem framing, which requires things like relevance. An AI can't create new relevance, because its weights are a statistical compression of what humans have already found relevant. It inherits a pre-given frame.

I might be confused about what you mean by tractable. But it seems to me that tractability can't do the work the bounty asks. Tractability is formally defined relative to a fixed problem space. But universal creativity is (at least partially) the capacity to restructure the space, to change what counts as a problem, a solution, and relevant data.

#4860 · Knut Sondre Sæbø, revised 20 days ago

I agree that tractability is related to a given problem space, and that creativity is about reshaping the problem space, among other things. Given that I've been thinking of the problem space as the space of all explanations, I'm not sure where I stand... Maybe the "space of all explanations" framing is wrongheaded, because a mind never has any actionable knowledge of that space? We can discuss the space of all explanations in some sense, but we can't organize or describe it in any substantive way...

Also, per #4865, you helped me remember that personhood could involve intractable algorithms, but ones which only ever run with small inputs, since that can still be perfectly practical. Whether that means the whole person is a tractable algorithm, I'm not sure.

Between these points I think this is enough for you to claim the bounty, because it does argue that personhood "should not be defined in terms of tractability", per the bounty terms (italics mine, here). Tractability does not help explain personhood. Or, in any case, it doesn't seem like this line of discussion will be very fruitful (but this could itself be mistaken).

  Tyler Mills commented on idea #4864.

I think I see now, and agree with the above. Partly a semantics issue (yes, I'm thinking of an algorithm in the "formal" CS sense: an abstract/mathematical finite procedure). The scare quotes were meant to suggest that one could attempt to implement one algorithm while the implementation in fact more closely implements some other, unrelated algorithm, though this is confusing.

At any rate, how ChatGPT summarized it makes sense to me:
"One function → many algorithms can compute it.
One algorithm → many implementations can realize it.
Complexity attaches primarily to algorithms, secondarily to implementations, and not to functions."

#4864 · Tyler Mills (OP), 17 days ago

"Secondarily" meaning:
Implementations of an algorithm inherit the algorithm’s asymptotic behavior. If an implementation has a different asymptotic behavior than one algorithm, it is effectively a different algorithm.
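A minimal Python sketch (mine, not from the discussion) of the function/algorithm distinction above: two implementations compute the same function but realize different algorithms with different asymptotic behavior.

```python
# Hypothetical illustration: two ways to compute the same function
# (the sum 1 + 2 + ... + n), realizing two different algorithms.

def sum_loop(n: int) -> int:
    """O(n) algorithm: one addition per term."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n: int) -> int:
    """O(1) algorithm: Gauss's closed form n(n+1)/2."""
    return n * (n + 1) // 2

# Same function (identical outputs), different complexity:
print(sum_loop(100), sum_formula(100))  # both print 5050
```

One function, two algorithms; and each of these algorithms could in turn have many concrete implementations, faster or slower by constant factors but with the same limiting behavior.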

  Tyler Mills commented on idea #4851.

By "tractable", do you mean "efficient relative to a fixed task"?

#4851 · Knut Sondre Sæbø, revised 20 days ago

Yes, my understanding is that the standard sense of tractable, for some algorithm, is: can be executed in time that grows at worst as a polynomial function of the input size. This is the sense I mean. The fixed task would be: create a given explanation in the space of all possible explanations.

Implementations of a given algorithm can be way more or less efficient in practice, though. Maybe personhood does require intractable algorithms, but ones which only ever run with small inputs... The question of the bounty is whether we can make a case for or against this. But part of the hope is also to learn whether this whole framing is mistaken.
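To make the tractable/intractable contrast concrete, here is a hypothetical sketch (my numbers, not from the thread) comparing polynomial and exponential step counts as input size grows:

```python
# Hypothetical step counts for an O(n^2) (tractable) algorithm versus
# an O(2^n) (intractable) one. Constant factors are ignored; the point
# is the limiting behavior as the input size n grows.

def poly_steps(n: int) -> int:
    return n ** 2  # polynomial growth: remains feasible

def exp_steps(n: int) -> int:
    return 2 ** n  # exponential growth: quickly becomes infeasible

for n in (10, 30, 60):
    print(f"n={n}: poly={poly_steps(n)}, exp={exp_steps(n)}")
```

A faster implementation of the exponential algorithm only shifts constants; it cannot move the algorithm across the tractability line. Though, as noted above, an intractable algorithm run only on small inputs can still be perfectly practical.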

  Tyler Mills commented on idea #4849.

"Complexity" in the sense of growth behavior with input size?

Yes.

I can see how an "implementation" of one algorithm in practice can accidentally change it to another algorithm.

Not sure why you put that in scare quotes. You might be right in the CS sense where ‘algorithm’ refers to an abstract procedure whereas ‘implementation’ is concrete code realizing that algorithm. (Though as a disclaimer, I don’t have a CS degree. My experience with programming is fully on-the-job.)

My point is more that two different implementations that compute the same function can have different big O. In that case, they’re usually considered different algorithms, even if the high-level goal is the same.

Regardless, the structure of the program is by far the most important factor determining performance characteristics. If you were saying that complexity is independent of implementation only insofar as the implementation truly implements the same algorithm, then I agree. So I’m not sure whether I should mark this as a counter-criticism. For now I won’t, pending new evidence.

#4849 · Dennis Hackethal, 21 days ago

I think I see now, and agree with the above. Partly a semantics issue (yes, I'm thinking of an algorithm in the "formal" CS sense: an abstract/mathematical finite procedure). The scare quotes were meant to suggest that one could attempt to implement one algorithm while the implementation in fact more closely implements some other, unrelated algorithm, though this is confusing.

At any rate, how ChatGPT summarized it makes sense to me:
"One function → many algorithms can compute it.
One algorithm → many implementations can realize it.
Complexity attaches primarily to algorithms, secondarily to implementations, and not to functions."

  Tyler Mills addressed criticism #4840.

The given algorithm has a complexity, independent of [the implementation]

No, the complexity depends on the implementation.

#4840 · Dennis Hackethal, 24 days ago

"Complexity" in the sense of growth behavior with input size? Further reading is still suggesting to me that this is intrinsic to a given algorithm (or class of them). Intrinsic to the math and logic. Implementations can be faster/slower/hungrier for a given input, but if they have different limiting behavior, aren't they different algorithms? I can see how an "implementation" of one algorithm in practice can accidentally change it to another algorithm.

  Tyler Mills commented on criticism #4843.

"Understanding" isn't just another way of saying "can explain.". Explaining follows from understanding, but isn't synonymous. An RNG could by chance generate a good explanation, but it doesn't understand it, and therefore can't distinguish it from garbage. Understanding involves recognizing that something is a good explanation. It is conscious understanding that makes conjecture and criticism possible. Without it, you have no criticism, only random selection. What do you think of the suggestion that what's lacking from the explanatory universality definition, is an intelligent selection mechanism. A random program can generate any explanation given infinite time, but it will never select which explanation is good.

#4843 · Knut Sondre Sæbø, revised 24 days ago

This is a good point, related to Dirk's #4813. As far as the bounty goes, I think my response in #4823 applies here as well, however. To refine it:
Recognizing, criticizing, and being able to understand explanations could all be requisites for tractably synthesizing any possible explanation. The bounty regards whether the tractability requirement can be done without.

It seems like a mind's being able to create, recognize, understand, and differentiate (etc.) good explanations is necessary but not sufficient for personhood; if that process is intractable, then beyond a certain amount of current knowledge (considering that as the input to the process), the person effectively cannot continue with it... so that compromises the universality.

They must be able to create, recognize and understand any given explanation, and maintain that ability as their knowledge grows, ad infinitum...

  Tyler Mills commented on criticism #4839.

Maybe I’m misunderstanding you, but that’s how standing bounties work already.

When you fund a standing bounty, you set the number of criticisms you’re willing to pay for, and the amount for each.

If that’s something you want to do for your current bounty, you still can, before current funding runs out.

See also “How Do Bounties Work?”

#4839 · Dennis Hackethal (OP), 24 days ago

Yeah, I'm not sure why I wrote this... I remember the option for number of criticisms now. I guess it slipped my mind.

  Tyler Mills posted criticism #4838.

The "Battle tested" badge should have a hyphen!
https://www.merriam-webster.com/dictionary/battle-tested

  Tyler Mills posted idea #4837.

Bounties could pay out multiplicatively, up to a limit (e.g. $10 per criticism, up to 3). This would preserve the incentive for bounty hunting after one criticism has already been posted.

  Tyler Mills commented on idea #4824.

Thoughts on an optional "implies" relation for ideas? I find myself commenting on one idea with something it implies, then criticizing that, but the original idea is not marked criticized. Being able to chain or bundle ideas would avoid the bookkeeping of creating a new criticism for each step in the chain when one is criticized.

#4824 · Tyler Mills, 24 days ago

Related to this or not, it could be useful to be able to set a bounty on a set of ideas, rather than just one: "Criticize any of these ideas for $n."

  Tyler Mills addressed criticism #4834.

Currently, a single gray "thread" comes off an idea, and splits off into sub-ideas. A single criticism in the above scheme would turn the whole thread red, which is ambiguous.

#4834 · Tyler Mills, 24 days ago

The main thread is ambiguous currently, by that reasoning: it's always gray. Having the whole thing red to indicate one or more pending criticisms below seems useful, and cool. And the offshoots from the main thread (the little curly part leading to each sub-idea) can have the new colors.

E.g.: User scrolls down the main bright red thread, past gray comment offshoots and dim red refuted criticism offshoots, until reaching the bright red pending criticisms offshoot that is the cause of the main thread being bright red. (!)

  Tyler Mills addressed criticism #4827.

Reiterating/refining #3904: I think the yellow "Criticism of" bubbles can and should be replaced by a graphical indication that is much easier on the eyes. The dropdown line can be made red if the comment it leads to is a criticism, and the bubble on the criticism can be eliminated. Reading the yellow bubble to get the idea # it is referring to, then searching the ideas above for the matching # is inelegant (even if it is usually the one right above).

#4827 · Tyler Mills, 24 days ago

Currently, a single gray "thread" comes off an idea, and splits off into sub-ideas. A single criticism in the above scheme would turn the whole thread red, which is ambiguous.