Tyler Mills

@tyler-mills · Joined Jan 2026 · Ideas
  Tyler Mills commented on idea #4751.

SOLUTION: The apple programs are not the same programs from one execution to the next. They are re-evolved every time they are run. This evolution is what the person is doing, and so must be what gives rise to the experience of the apple rendering.

#4751 · Tyler Mills (OP), about 1 month ago

This implies that no two instances of experience, even if seemingly identical, are caused by the same programs.

  Tyler Mills revised idea #4749.

SOLUTION: The apple programs give rise to consciousness only in a given context. Only when run a certain way, by a person.

SOLUTION: The apple programs give rise to consciousness only in a given context. Only when run a certain way, by a person.

  Tyler Mills revised idea #4748.

PROBLEM: Why are we conscious of the apple rendering? Given (6), why is there an experience of it, if the programs comprising it are looping, and therefore predefined?

PROBLEM: Why are we conscious of the apple rendering? Given (6), why is there an experience of it, if the programs comprising it are looping, and therefore predefined?

  Tyler Mills commented on idea #4748.

PROBLEM: Why are we conscious of the apple rendering? Given (6), why is there an experience of it, if the programs comprising it are looping, and therefore predefined?

#4748 · Tyler Mills (OP), about 1 month ago

SOLUTION: The apple programs are not the same programs from one execution to the next. They are re-evolved every time they are run. This evolution is what the person is doing, and so must be what gives rise to the experience of the apple rendering.

  Tyler Mills criticized idea #4749.

SOLUTION: The apple programs give rise to consciousness only in a given context. Only when run a certain way, by a person.

#4749 · Tyler Mills (OP), about 1 month ago

This suggests that programs can be “run differently” so as to result in a different computation. That is false because it violates Substrate Independence: what a program computes is unaffected by its physical implementation. If a “context” changes what the program is computing, then that is a different program. Suggesting that a person running the apple programs “makes them” conscious is therefore unsound. The programs are either conscious or not. If they were, by (A1), they would be people.
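The Substrate Independence point can be sketched in code (a minimal illustration with names of our own invention, not anything from the discussion): a pure program maps the same input to the same output regardless of what context invokes it.

```python
def render_apple(stimulus):
    # A fixed program: the same input always yields the same output.
    return [pixel * 2 for pixel in stimulus]

def person_running(program, data):
    # A wrapper standing in for a "person" or any other surrounding context.
    return program(data)

stimulus = [1, 2, 3]

direct = render_apple(stimulus)                    # run "bare"
in_context = person_running(render_apple, stimulus)  # run "by a person"

# The context changed nothing about the computation.
assert direct == in_context == [2, 4, 6]
```

If a wrapper did alter the output, that would be a different program, which is the criticism's point.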

  Tyler Mills commented on idea #4748.

PROBLEM: Why are we conscious of the apple rendering? Given (6), why is there an experience of it, if the programs comprising it are looping, and therefore predefined?

#4748 · Tyler Mills (OP), about 1 month ago

SOLUTION: The apple programs give rise to consciousness only in a given context. Only when run a certain way, by a person.

  Tyler Mills commented on idea #4747.

(7) We can be conscious of the apple imagery for the entire 5 seconds.

#4747 · Tyler Mills (OP), about 1 month ago

PROBLEM: Why are we conscious of the apple rendering? Given (6), why is there an experience of it, if the programs comprising it are looping, and therefore predefined?

  Tyler Mills commented on idea #4746.

(6) Repeated running of the same fixed program is automatic, requires no creativity, and cannot constitute experience.

#4746 · Tyler Mills (OP), about 1 month ago

(7) We can be conscious of the apple imagery for the entire 5 seconds.

  Tyler Mills commented on idea #4745.

(5) Repeated running of the same fixed program, not being a person, does not make it a person.

#4745 · Tyler Mills (OP), about 1 month ago

(6) Repeated running of the same fixed program is automatic, requires no creativity, and cannot constitute experience.

  Tyler Mills commented on idea #4743.

(4) The programs rendering the apple are not people, so cannot themselves constitute experience.

#4743 · Tyler Mills (OP), revised about 1 month ago

(5) Repeated running of the same fixed program, not being a person, does not make it a person.

  Tyler Mills revised idea #4741.

(4) By A1, the programs rendering the apple are not people, so cannot themselves constitute experience.

(4) The programs rendering the apple are not people, so cannot themselves constitute experience.

  Tyler Mills revised idea #4739.

(4) The programs rendering the apple are not people, so cannot themselves constitute experience.

(4) By A1, the programs rendering the apple are not people, so cannot themselves constitute experience.

  Tyler Mills posted idea #4740.

Assumption A1: Only programs that are people while running constitute qualia/experience/subjectivity/consciousness.

  Tyler Mills commented on idea #4738.

(3) The programs rendering the apple imagery must be looping until stopped, since they could not have advance knowledge of when the stimulus stops.

#4738 · Tyler Mills (OP), about 1 month ago

(4) The programs rendering the apple are not people, so cannot themselves constitute experience.

  Tyler Mills commented on idea #4737.

(2) The rendering is caused by the running of some number of programs.

#4737 · Tyler Mills (OP), about 1 month ago

(3) The programs rendering the apple imagery must be looping until stopped, since they could not have advance knowledge of when the stimulus stops.
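Premise (3) can be sketched as a loop (a toy illustration; the names are ours): the renderer cannot know in advance when the stimulus ends, so it re-runs the same fixed step until the input stops.

```python
def render_while_present(stimulus_frames):
    # The renderer has no advance knowledge of when the stimulus stops:
    # it simply repeats the same fixed rendering step until darkness.
    rendered = []
    for frame in stimulus_frames:
        if frame is None:            # darkness returns; the loop exits
            break
        rendered.append(f"apple@{frame}")
    return rendered

# Five identical stimulus frames, then darkness.
frames = [0, 1, 2, 3, 4, None]
assert render_while_present(frames) == [
    "apple@0", "apple@1", "apple@2", "apple@3", "apple@4"
]
```

Each pass through the loop is the same predefined step, which is what premises (5) and (6) turn on.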

  Tyler Mills commented on idea #4736.

(1) During the entire 5 seconds, your mind renders the image of the apple.

#4736 · Tyler Mills (OP), about 1 month ago

(2) The rendering is caused by the running of some number of programs.

  Tyler Mills started a discussion titled ‘Can qualia be separated from personhood?’.

Can a program which is not a person constitute an experience?

Imagine you are in a pitch black room. Before your eyes, a spotlight illuminates an apple for 5 seconds before darkness returns. Among other things, your mind will render the image of the apple for the 5 seconds, then it will not (afterimages aside). Assume the physical stimulus is identical for the whole 5 seconds.

Itemized discussion below.

The discussion starts with idea #4736.

(1) During the entire 5 seconds, your mind renders the image of the apple.

  Tyler Mills criticized idea #4684.

Since evolution created genetic knowledge from nothing, it can be said to have the same "narrow creativity" as AI. The confusion over whether AI "is creative" can be resolved by saying that it is, but only narrowly (like evolution), and that the creativity defining people is universal, not limited to any domain. AI creates knowledge in domains it was designed for; AGI can create knowledge in all possible domains, each of which it designs itself.

#4684 · Tyler Mills (OP), about 1 month ago

Criticized per #4718: AIs are not "narrowly creative"; there is only creativity in the binary, universal sense, per Deutsch.

  Tyler Mills commented on criticism #4718.

Move 37 was not new knowledge. It was the winning choice in that situation before the AI ever existed, because it was deducible from the game's rules and the current board state. It was implicit knowledge, already contained in the system at that time. AlphaGo made it explicit, by finding it, like a search engine, but did not create it. If you calculate the trillionth digit of pi, you haven't created new knowledge, at least not in any sense we should mean. You have simply revealed a value that was already fixed by a definition.

The fact that Move 37 wasn't explicitly in the training data, or supplied by the programmers, is irrelevant to its status as knowledge. This is true for pi, and for all content created by AI at the time of this writing.

#4718 · Tyler Mills (OP), about 1 month ago

The definition of fitness that rendered Move 37 the best choice originated outside the system.
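The “deducible from the game's rules and the current board state” claim in #4718 can be illustrated with exhaustive search on a trivial game (a toy stand-in, not AlphaGo's actual method): the optimal move is fixed by the rules and the position before any program runs; the search merely reveals it.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    # Simple Nim: players alternately take 1 or 2 stones; whoever takes
    # the last stone wins. True if the player to move can force a win.
    return any(not can_win(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones):
    # Exhaustive search over the game tree. The winning move, if any,
    # was already determined by the rules before this code existed.
    for take in (1, 2):
        if take <= stones and not can_win(stones - take):
            return take
    return None  # every legal move loses

assert best_move(4) == 1     # leave the opponent the losing count 3
assert best_move(3) is None  # from 3 stones, every move loses
```

On this view AlphaGo differs only in scale: Go's game tree is astronomically larger, so the value is approximated rather than computed exactly, but the criticism's point is that it is fixed by the rules either way.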

  Tyler Mills commented on criticism #4720.

If the human made Move 37 for the same reason as AlphaGo, it would not be creative. Such moves are creative when humans make them because they are not deducing them (they can't due to practical limitations). If something can be deduced, it is not creative. Creativity is the conjecture of a new structure which is not derivable/deducible/implicit via existing rules of inference. All AI-generated art is implicit in the training data and model design in the same sense, so is not being made via creativity.

#4720 · Tyler Mills (OP), about 1 month ago

This highlights the core mystery of AGI/creativity: if it is the creation of something which cannot be deduced from existing rules (yet is still helpful, hard-to-vary, knowledge-bearing, etc.), how can it be programmed? In a sense it cannot, as Deutsch writes: "...what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program." [https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence]

  Tyler Mills addressed criticism #4719.

If there had been no AlphaGo and no Move 37, and a human had made that move, it would no doubt be called creative genius, as similar human moves have been. Isn't the above a double standard?

#4719 · Tyler Mills (OP), about 1 month ago

If the human made Move 37 for the same reason as AlphaGo, it would not be creative. Such moves are creative when humans make them because they are not deducing them (they can't due to practical limitations). If something can be deduced, it is not creative. Creativity is the conjecture of a new structure which is not derivable/deducible/implicit via existing rules of inference. All AI-generated art is implicit in the training data and model design in the same sense, so is not being made via creativity.

  Tyler Mills addressed criticism #4718.

Move 37 was not new knowledge. It was the winning choice in that situation before the AI ever existed, because it was deducible from the game's rules and the current board state. It was implicit knowledge, already contained in the system at that time. AlphaGo made it explicit, by finding it, like a search engine, but did not create it. If you calculate the trillionth digit of pi, you haven't created new knowledge, at least not in any sense we should mean. You have simply revealed a value that was already fixed by a definition.

The fact that Move 37 wasn't explicitly in the training data, or supplied by the programmers, is irrelevant to its status as knowledge. This is true for pi, and for all content created by AI at the time of this writing.

#4718 · Tyler Mills (OP), about 1 month ago

If there had been no AlphaGo and no Move 37, and a human had made that move, it would no doubt be called creative genius, as similar human moves have been. Isn't the above a double standard?

  Tyler Mills criticized idea #4683.

AIs have created output that is not only novel, but seems to constitute new knowledge (resilient information), such as the famous Move 37 from AlphaGo. That is new knowledge because the move was not present in the training data explicitly, nor did the designers construct it.

#4683 · Tyler Mills (OP), about 1 month ago

Move 37 was not new knowledge. It was the winning choice in that situation before the AI ever existed, because it was deducible from the game's rules and the current board state. It was implicit knowledge, already contained in the system at that time. AlphaGo made it explicit, by finding it, like a search engine, but did not create it. If you calculate the trillionth digit of pi, you haven't created new knowledge, at least not in any sense we should mean. You have simply revealed a value that was already fixed by a definition.

The fact that Move 37 wasn't explicitly in the training data, or supplied by the programmers, is irrelevant to its status as knowledge. This is true for pi, and for all content created by AI at the time of this writing.
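The pi analogy in the criticism above can be made concrete (a sketch using Machin's formula with integer arithmetic; the function names are ours): the digits the program outputs were fixed by the definition of pi before the computation was ever run, and running it only reveals them.

```python
def arctan_inv(x, scale):
    # Integer Taylor series for arctan(1/x), scaled by `scale`:
    # arctan(1/x) = 1/x - 1/(3*x^3) + 1/(5*x^5) - ...
    total = 0
    power = scale // x          # scale / x^(2k+1)
    k = 0
    while power:
        term = power // (2 * k + 1)
        total += term if k % 2 == 0 else -term
        power //= x * x
        k += 1
    return total

def pi_digits(n):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239),
    # computed with 10 guard digits to absorb truncation error.
    scale = 10 ** (n + 10)
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return pi_scaled // 10 ** 10  # the digit 3 followed by n decimals of pi

assert pi_digits(5) == 314159
```

Nothing here chooses the digits; they were "implicit knowledge, already contained in the system," in exactly the sense the criticism applies to Move 37.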

  Tyler Mills revised idea #4685.

Move 37 was not explicitly present in the training data, nor designed by the programmers, and is extremely hard to vary (Deutsch's criterion for good explanations). Was the move present implicitly in the design of the system and/or the training data? Or inexplicitly? Does either of these mean the discovery of the move was non-creative?

Move 37 was not explicitly present in the training data, nor designed by the programmers, and is extremely hard to vary (Deutsch's criterion for good explanations). Was the move present implicitly in the design of the system and/or the training data? Or inexplicitly? Do either of these mean the discovery of the move was non-creative?

  Tyler Mills commented on criticism #4694.

By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be a standard for personhood other than “able to generate any possible explanation” -- such as being able to do so tractably.

#4694 · Tyler Mills (OP), revised about 1 month ago

By the latter standard, neither nature nor random number generators are people, which is sensible. Nor can nature create any given piece of possible knowledge tractably: all possible knowledge exists only by way of the multiverse, a process that cannot be simulated in its entirety, even by a quantum computer, never mind tractably.