Tyler Mills
@tyler-mills · Joined Jan 2026 · Ideas
#4751 · Tyler Mills (OP) · about 1 month ago
SOLUTION: The apple programs are not the same programs from one execution to the next; they are re-evolved every time they are run. This evolution is what the person is doing, and so must be what gives rise to the experience consisting of the apple rendering.
This implies that no two instances of experience, even if seemingly identical, are caused by the same programs.
#4748 · Tyler Mills (OP) · about 1 month ago
PROBLEM: Why are we conscious of the apple rendering? Given (6), why is there an experience of it, if the programs comprising it are looping and therefore predefined?
#4749 · Tyler Mills (OP) · about 1 month ago
SOLUTION: The apple programs give rise to consciousness only in a given context: only when run a certain way, by a person.
This suggests that programs can be "run differently" to yield a different computation. That is false because it violates Substrate Independence: what a program computes is unaffected by its physical implementation. If a "context" changes what the program is computing, then it is a different program. So the claim that a person running the apple programs "makes them" conscious is not sound. The programs are either conscious or not. If they were, by (A1), they would be people.
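The Substrate Independence point can be illustrated with a toy sketch (the name `apple_program` and the transformation are hypothetical, not anything from the thread): the same program, invoked through a different execution context, computes exactly the same function.

```python
def apple_program(stimulus):
    # A fixed program: its output depends only on its input and its
    # definition, not on who or what runs it (Substrate Independence).
    return [min(255, 2 * s) for s in stimulus]

# Run directly:
direct = apple_program([10, 200])

# Run through an extra interpretive layer -- a different "context"
# in the sense criticized above -- and the computation is unchanged:
via_interpreter = eval("apple_program([10, 200])")

assert direct == via_interpreter == [20, 255]
```

If some "context" did change the output, that would simply mean a different program was being run, which is the post's point.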
#4747 · Tyler Mills (OP) · about 1 month ago
(7) We can be conscious of the apple imagery for the entire 5 seconds.
#4746 · Tyler Mills (OP) · about 1 month ago
(6) Repeated running of the same fixed program is automatic, requires no creativity, and cannot constitute experience.
#4745 · Tyler Mills (OP) · about 1 month ago
(5) Repeatedly running a fixed program that is not a person does not make it a person.
#4743 · Tyler Mills (OP) · revised about 1 month ago
(4) By A1, the programs rendering the apple are not people, so cannot themselves constitute experience.
Assumption A1: Only programs that are people while running constitute qualia/experience/subjectivity/consciousness.
#4738 · Tyler Mills (OP) · about 1 month ago
(3) The programs rendering the apple imagery must be looping until stopped, since they could not have advance knowledge of when the stimulus stops.
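The looping picture in (3) can be sketched as a toy (all names hypothetical, assuming the stimulus arrives as a discrete stream of ticks): the renderer has no way to know when the stream ends, so it must re-run the same fixed program each tick until the stimulus stops.

```python
import itertools

def render_frame(stimulus):
    # Hypothetical fixed rendering step: same input, same output, every call.
    return tuple(2 * s for s in stimulus)

def render_until_stopped(stimulus_source):
    # The renderer cannot have advance knowledge of when the stimulus
    # ends, so it loops, re-running the same fixed program per tick.
    frames = []
    for stimulus in stimulus_source:  # the loop ends only when the stream does
        frames.append(render_frame(stimulus))
    return frames

# An identical stimulus for 5 ticks means 5 identical runs of the same program:
frames = render_until_stopped(itertools.repeat((1, 2), 5))
assert frames == [(2, 4)] * 5
```

Nothing in the loop body changes between iterations, which is exactly what (6) above leans on: the repetition is automatic and predefined.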
#4737 · Tyler Mills (OP) · about 1 month ago
(2) The rendering is caused by the running of some number of programs.
#4736 · Tyler Mills (OP) · about 1 month ago
(1) During the entire 5 seconds, your mind renders the image of the apple.
Can a program which is not a person constitute an experience?
Imagine you are in a pitch black room. Before your eyes, a spotlight illuminates an apple for 5 seconds before darkness returns. Among other things, your mind will render the image of the apple for the 5 seconds, then it will not (afterimages aside). Assume the physical stimulus is identical for the whole 5 seconds.
Itemized discussion below.
#4684 · Tyler Mills (OP) · about 1 month ago
Since evolution created genetic knowledge from nothing, it can be said to have the same "narrow creativity" as AI. The confusion over whether AI "is creative" can be resolved by saying that it is, but only narrowly (like evolution), and that the creativity defining people is universal, not limited to any domain. AI creates knowledge in domains it was designed for; AGI can create knowledge in all possible domains, each of which it designs itself.
Criticized per #4718: AIs are not "narrowly creative"; there is only creativity in the binary, universal sense, per Deutsch.
#4718 · Tyler Mills (OP) · about 1 month ago
Move 37 was not new knowledge. It was the winning choice in that situation before the AI ever existed, because it was deducible from the game's rules and the current board state. It was implicit knowledge, already contained in the system at that time. AlphaGo made it explicit by finding it, like a search engine, but did not create it. If you calculate the trillionth digit of pi, you haven't created new knowledge, at least not in any sense we should mean. You have simply revealed a value that was already fixed by a definition.
The fact that Move 37 was not explicitly in the training data, nor supplied by the programmers, is irrelevant to its status as knowledge. The same is true for pi, and for all content created by AI at the time of this writing.
The definition of fitness that rendered Move 37 the best choice originated outside the system.
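The pi analogy can be made concrete with a short illustrative sketch (not from the thread; it assumes Machin's formula and fixed-point integer arithmetic). The program "reveals" decimal digits of pi that were fixed by definition long before any program ran:

```python
def pi_digits(n):
    """Return pi to n decimal digits (truncated), via Machin's formula
    pi/4 = 4*arctan(1/5) - arctan(1/239), in fixed-point integers."""
    scale = 10 ** (n + 10)  # 10 guard digits absorb floor-division error

    def arctan_inv(x):
        # arctan(1/x) = sum over k of (-1)^k / ((2k+1) * x^(2k+1))
        total = term = scale // x
        x2, k, sign = x * x, 1, -1
        while term:
            term //= x2
            total += sign * term // (2 * k + 1)
            k += 1
            sign = -sign
        return total

    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    s = str(pi_scaled // 10 ** 10)  # drop the guard digits
    return s[0] + "." + s[1:n + 1]

assert pi_digits(10) == "3.1415926535"
```

Whatever one concludes about Move 37, the digits here were determined entirely by a definition; the program only makes them explicit.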
#4720 · Tyler Mills (OP) · about 1 month ago
If the human made Move 37 for the same reason as AlphaGo, it would not be creative. Such moves are creative when humans make them because they are not deducing them (they can't, due to practical limitations). If something can be deduced, it is not creative. Creativity is the conjecture of a new structure which is not derivable/deducible/implicit via existing rules of inference. All AI-generated art is implicit in the training data and model design in the same sense, so is not being made via creativity.
This highlights the core mystery of AGI/creativity: if it is the creation of something which cannot be deduced from existing rules (yet is still helpful, hard-to-vary, knowledge-bearing, etc.), how can it be programmed? In a sense it cannot, as Deutsch writes: "...what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program." [https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence]
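The "deducible, therefore not creative" claim can be illustrated with a toy game in which the best move in every position is strictly deducible from the rules (a hypothetical Nim solver; nothing to do with Go or AlphaGo beyond the analogy):

```python
from functools import lru_cache

# Nim variant: take 1-3 stones per turn; whoever takes the last stone wins.
# In a game with fixed rules, the winning move in any position is already
# fixed by those rules -- a solver only reveals it.

@lru_cache(maxsize=None)
def winning_move(stones):
    """Return a winning take (1-3) from this position, or None if the
    position is lost under perfect play. Pure deduction from the rules."""
    for take in (1, 2, 3):
        if take <= stones and (take == stones or winning_move(stones - take) is None):
            return take
    return None

# The "Move 37" of this toy game existed before any engine searched for it:
assert winning_move(4) is None  # 4 stones: every move loses
assert winning_move(7) == 3     # take 3, leaving the opponent on 4
```

On the post's view, finding `winning_move(7) == 3` is search, not creativity, because it follows deductively from the rules and the board state.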
#4719 · Tyler Mills (OP) · about 1 month ago
If there had been no AlphaGo and no Move 37, and a human had made that move, as they have similar moves, it would no doubt be called creative genius (as similar moves have). Isn't the above a double standard?
#4683 · Tyler Mills (OP) · about 1 month ago
AIs have created output that is not only novel, but seems to constitute new knowledge (resilient information), such as the famous Move 37 from AlphaGo. That is new knowledge because the move was not present in the training data explicitly, nor did the designers construct it.
Move 37 was not explicitly present in the training data, nor designed by the programmers, and is extremely hard to vary (Deutsch's criterion for good explanations). Was the move present implicitly in the design of the system and/or the training data? Or inexplicitly? Does either of these mean the discovery of the move was non-creative?
#4694 · Tyler Mills (OP) · revised about 1 month ago
By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be a standard for personhood other than "able to generate any possible explanation", such as being able to do so tractably.
By the latter standard, neither nature nor random number generators are people, which is sensible; nor can nature create any given piece of possible knowledge tractably. All possible knowledge exists only by way of the multiverse, a process that cannot be simulated in its entirety even by a quantum computer, never mind tractably.