
Tyler Mills

@tyler-mills · Joined Jan 2026 · Ideas
  Tyler Mills revised idea #4692 and marked it as a criticism.

By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be a standard for personhood other than: able to generate any possible explanation. Such as: can do that tractably.

  Tyler Mills revised criticism #4691 and unmarked it as a criticism.

By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be standard for personhood other than: able to generate any possible explanation. Such as: can do that tractably.

By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be a standard for personhood other than: able to generate any possible explanation. Such as: can do that tractably.

  Tyler Mills addressed criticism #4690.

Nature does have universal creativity; it can generate any possible knowledge. And all possible knowledge exists somewhere in reality.

#4690 · Tyler Mills (OP), about 1 month ago

By this standard, a random number generator has universal creativity as well, and is therefore a person. So there must be a standard for personhood other than: able to generate any possible explanation. Such as: can do that tractably.

  Tyler Mills addressed criticism #4689.

But nature created genetic knowledge from nothing. So this is an example of something that lacks universal creativity yet created knowledge ex nihilo.

#4689 · Tyler Mills (OP), about 1 month ago

Nature does have universal creativity; it can generate any possible knowledge. And all possible knowledge exists somewhere in reality.

  Tyler Mills criticized idea #4688.

This also frames the distinction between AI and AGI (and "universal creativity") as whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing; bounded creativity must start with something.

#4688 · Tyler Mills (OP), about 1 month ago

But nature created genetic knowledge from nothing. So this is an example of something that lacks universal creativity yet created knowledge ex nihilo.

  Tyler Mills commented on idea #4684.

Since evolution created genetic knowledge from nothing, it can be said to have the same "narrow creativity" as AI. The confusion over whether AI "is creative" can be resolved by saying that it is, but only narrowly (like evolution), and that the creativity defining people is universal, not limited to any domain. AI creates knowledge in domains it was designed for; AGI can create knowledge in all possible domains, each of which it designs itself.

#4684 · Tyler Mills (OP), about 1 month ago

This also frames the distinction between AI and AGI (and "universal creativity") as whether the system is capable of creating knowledge ex nihilo, as argued by Deutsch. Only universal creativity could create knowledge from nothing; bounded creativity must start with something.

  Tyler Mills commented on idea #4686.

This seems to me to be the same distinction that Deutsch and others have made between the genetic evolution we can simulate through evolutionary algorithms and the kind we actually observe in nature. I think it would be helpful to investigate evolutionary algorithms a bit further if you want to develop a clear distinction. This is how I describe it in my book:

There are several mechanisms that genes use to create variants, including sex, mutation, gene flow, and genetic drift, all of which appear to introduce change randomly. But we now know it cannot be entirely random. Something more is shaping what gets trialed, because when we model and simulate evolution using random changes, we never see the sort of novelties that arose in nature. We see optimization. We see exploitation. We see organisms become better at using resources they already use. But we never see a genuinely new use of a resource emerge. A fin may become better at swimming, but it does not become a limb. A metabolism may become more efficient, but it does not open up an entirely new biological pathway. And yet the natural world is full of exactly such extraordinary adaptations.
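The kind of simulation described above can be sketched in a few lines. This is a toy illustration, not any real evolutionary-algorithm library: the `evolve` function, the bitstring genome, and the one-bit mutation are all assumptions made up for the example. The point it shows is that the fitness function fixes the domain of exploration in advance.

```python
import random

# Toy evolutionary algorithm. The fitness function predefines the domain:
# variants are trialed and selected, but nothing the fitness function does
# not reward can ever be discovered.
def evolve(fitness, genome_len=10, pop_size=20, generations=100):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]               # selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1  # random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# The predefined "domain" here is counting ones. The algorithm optimizes
# within it, but it cannot invent a new representation or a new goal.
best = evolve(fitness=sum)
print(sum(best))  # approaches genome_len as the population converges
```

The fin-to-limb point maps onto this directly: the algorithm can only ever get better at whatever `fitness` already measures.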

#4686 · Edwin de Wit, about 1 month ago

I keep returning to the notion of the space or domain in which simulated evolution so far operates. It seems we can say that current simulated evolution can discover new knowledge via conjecture and criticism, but it is always bound by a domain predefined by fitness functions, automatic evaluators, and so on, even if that domain itself contains many subdomains.

Then we can say that in nature, and in the minds of people, there is no externally defined space in which exploration is happening; the space is also evolving, also subject to criticism. I suspect this is part of how open-endedness comes about.

But the immediate question here was how to explain why AI is or is not "creative". Saying AIs are "narrowly creative" seems like it could work, or saying they are creative within a fixed domain. The common intuition, I think, is that current AIs are "truly" creative, and I would say this is because the predefined domain (of LLMs, for instance) is gigantic, being sculpted by an internet-sized training corpus. But I suppose we should argue that "true creativity" means universal creativity.

I was curious if there are criticisms of the argument that current AI does legitimately create new knowledge.

  Tyler Mills commented on idea #4683.

AIs have created output that is not only novel, but seems to constitute new knowledge (resilient information), such as the famous Move 37 from AlphaGo. That is new knowledge because the move was not present in the training data explicitly, nor did the designers construct it.

#4683 · Tyler Mills (OP), about 1 month ago

Move 37 was not explicitly present in the training data, nor designed by the programmers, and is extremely hard to vary (Deutsch's criterion for good explanations). Was the move present implicitly in the design of the system and/or the training data? Or inexplicitly? Does either of these mean the discovery of the move was non-creative?

  Tyler Mills commented on idea #4683.

AIs have created output that is not only novel, but seems to constitute new knowledge (resilient information), such as the famous Move 37 from AlphaGo. That is new knowledge because the move was not present in the training data explicitly, nor did the designers construct it.

#4683 · Tyler Mills (OP), about 1 month ago

Since evolution created genetic knowledge from nothing, it can be said to have the same "narrow creativity" as AI. The confusion over whether AI "is creative" can be resolved by saying that it is, but only narrowly (like evolution), and that the creativity defining people is universal, not limited to any domain. AI creates knowledge in domains it was designed for; AGI can create knowledge in all possible domains, each of which it designs itself.

  Tyler Mills started a discussion titled ‘Are AI models narrowly creative?’.

Recent conversations have revealed that I cannot argue against the notion that current AI systems create new knowledge (and so are creative in some domain). Yet David Deutsch has argued that creativity is a binary property of programs, and that those that have it are people (including AGIs), who can create all possible explanatory knowledge. I am not of the mind that current AI is AGI. So I will try to iron all this out here.

The discussion starts with idea #4683.

AIs have created output that is not only novel, but seems to constitute new knowledge (resilient information), such as the famous Move 37 from AlphaGo. That is new knowledge because the move was not present in the training data explicitly, nor did the designers construct it.

  Tyler Mills addressed criticism #4680.

Computational universality only implies that all computable programs can be run by UCs. But what is relevant here is which programs can be reached by a given program, i.e. synthesized by it. A UC whose knowledge contains only whirlpool-scale conjectures (resulting from external stimulus or not) will have no niches relating to molecule-scale theories. Such theories solve no problems for it, so there will be no selection for them, and evolution will not develop them. Molecule-scale theories constitute intractable niches for the whirlpool system. They are still possible to run, if present, but that is not what's at issue. Observer Theory is correct if it is saying that the theories of reality a system develops will depend on the abstraction level of its knowledge with respect to reality.

#4680 · Tyler Mills, about 1 month ago

Of course it's true that a system confined to a given abstraction will only evolve theories of that scale. But a person can operate at all computable levels of abstraction. The growth of knowledge by people (e.g. Relativity) could only have happened because people can vary their abstractions arbitrarily: Relativity solves no problems at any one given level of abstraction, but across many. Observer Theory might be right for certain systems, but it is wrong for people.

  Tyler Mills addressed criticism #4679.

The aliasing that happens with the flipbook is a consequence of an imaging system. To suggest that theories/programs/explanations would be subject to aliasing in the same way suggests that they are derived from observation, which is Empiricism (false). They are created from mutation and criticism of existing knowledge, and this process can be performed by all universal computers. Any explanation/rendering/program runnable on one UC is runnable on all, so two observers can always converge to the same laws of physics.

#4679 · Tyler Mills, about 1 month ago

Computational universality only implies that all computable programs can be run by UCs. But what is relevant here is which programs can be reached by a given program, i.e. synthesized by it. A UC whose knowledge contains only whirlpool-scale conjectures (resulting from external stimulus or not) will have no niches relating to molecule-scale theories. Such theories solve no problems for it, so there will be no selection for them, and evolution will not develop them. Molecule-scale theories constitute intractable niches for the whirlpool system. They are still possible to run, if present, but that is not what's at issue. Observer Theory is correct if it is saying that the theories of reality a system develops will depend on the abstraction level of its knowledge with respect to reality.

  Tyler Mills criticized idea #4676.

I'm realizing this is very related to Stephen Wolfram's "Observer Theory", which is interesting, but sounds worryingly relativist to me at times. Something like: Different observers will coarse-grain different laws of physics than the ones we have, for the same reason that the flipbook appears to have motion to us, but not to an observer viewing through a high-speed camera. Debating with LLMs about how that seems to violate computational universality has left me frustrated.

#4676 · Tyler Mills, about 1 month ago

The aliasing that happens with the flipbook is a consequence of an imaging system. To suggest that theories/programs/explanations would be subject to aliasing in the same way suggests that they are derived from observation, which is Empiricism (false). They are created from mutation and criticism of existing knowledge, and this process can be performed by all universal computers. Any explanation/rendering/program runnable on one UC is runnable on all, so two observers can always converge to the same laws of physics.

  Tyler Mills commented on idea #4675.

Better example maybe: A whirlpool in water only exists to an observer that can create whirlpools in its VR. If the observer only has molecule-scale abstraction, it cannot coarse-grain, so there are no whirlpools for it, or explanations in terms of them. (Such a system also cannot be a person, because a person can create all possible explanations).

#4675 · Tyler Mills, about 1 month ago

I'm realizing this is very related to Stephen Wolfram's "Observer Theory", which is interesting, but sounds worryingly relativist to me at times. Something like: Different observers will coarse-grain different laws of physics than the ones we have, for the same reason that the flipbook appears to have motion to us, but not to an observer viewing through a high-speed camera. Debating with LLMs about how that seems to violate computational universality has left me frustrated.

  Tyler Mills commented on idea #4615.

Is all emergence relative? I notice that when the rapidly changing still images of a flipbook or a zoetrope give rise to perceived motion, that is a result of aliasing on the part of the observer. Is this true in all cases of emergence, perceptual and otherwise?

#4615 · Tyler Mills, about 2 months ago

Better example maybe: A whirlpool in water only exists to an observer that can create whirlpools in its VR. If the observer only has molecule-scale abstraction, it cannot coarse-grain, so there are no whirlpools for it, or explanations in terms of them. (Such a system also cannot be a person, because a person can create all possible explanations).

  Tyler Mills commented on idea #4623.

Can you say more about what you mean by “relative”? I agree about the flipbook example, but the term “relative” is throwing me off a bit here.

#4623 · Dennis Hackethal, about 2 months ago

I'm wondering if what is true for the flip book is true for many phenomena, or all. Is the emergence of an autonomous feature always a function of ("relative to") what is observing/explaining/attempting to reproduce the system?

  Tyler Mills posted idea #4615.

Is all emergence relative? I notice that when the rapidly changing still images of a flipbook or a zoetrope give rise to perceived motion, that is a result of aliasing on the part of the observer. Is this true in all cases of emergence, perceptual and otherwise?
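The flipbook/zoetrope effect is temporal aliasing, and the observer-dependence can be made concrete with a small sketch. The frequencies below are made-up illustrative numbers, and `apparent_frequency` is a hypothetical helper, not a standard library function:

```python
# Temporal aliasing (the "wagon-wheel effect"): a process repeating at
# f_true Hz, sampled at f_s samples/sec, appears to repeat at the alias
# frequency folded into the Nyquist interval (-f_s/2, f_s/2].
def apparent_frequency(f_true, f_s):
    f = f_true % f_s
    if f > f_s / 2:
        f -= f_s
    return f

# A wheel spinning at 22 Hz filmed at 24 fps appears to spin backward
# at 2 Hz: the emergent "motion" depends on the observer's sampling rate.
print(apparent_frequency(22, 24))  # -> -2
print(apparent_frequency(2, 24))   # -> 2
```

Two observers sampling the same process at different rates perceive different motion, which is the relativity the idea is probing.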

  Tyler Mills addressed criticism #2666.

‘Veritula’ is a difficult name; people don’t know how to spell or pronounce it. They can’t easily remember it.

#2666 · Dennis Hackethal (OP), revised 6 months ago

There's something to be said for a degree of complexity and novelty in a name. It lends an air of thoughtfulness and could spark curiosity in potential new users.

  Tyler Mills commented on criticism #4356.

'Veritula' is not a difficult name as compared to other highly successful explanatory enterprises, like 'Veritasium.'

#4356 · Tyler Mills, 2 months ago

See also: "Kurzgesagt – In a Nutshell", the highly successful educational YT channel. I know people who are big fans, and yet can't pronounce the name correctly.

  Tyler Mills addressed criticism #2666.

‘Veritula’ is a difficult name; people don’t know how to spell or pronounce it. They can’t easily remember it.

#2666 · Dennis Hackethal (OP), revised 6 months ago

'Veritula' is not a difficult name as compared to other highly successful explanatory enterprises, like 'Veritasium.'

  Tyler Mills commented on criticism #4094.

You could think up a design for a self-replicating machine and then build it. Assuming you made no critical mistakes, you have made a self-replicator that hasn’t self-replicated yet.

It is considered a replicator based on what it can do, rather than on what it has done.

#4094 · Benjamin Davies, 3 months ago

Agreed. Thanks.

  Tyler Mills posted idea #4043.

How many times must something be replicated before the term 'replicator' applies? If it's a matter of reliability, what counts as reliable? Is "replicator-ness" on a continuum?

  Tyler Mills commented on idea #4041.

What is the distinction between replication and self-replication?

#4041 · Tyler Mills (OP), revised 3 months ago

The distinction is where the knowledge for performing the replication is physically located.

Replication, the general case, is an entity in an environment being recreated or copied because of the environment (which can include the entity itself, as in self-replication).

Self-replication is the special case in which an entity's replication is caused by aspects of itself alone: the knowledge for its replication is within it.

  Tyler Mills revised idea #4040.

What is the distinction between replication and self-replication? Does anything "truly" self-replicate?

What is the distinction between replication and self-replication?

  Tyler Mills started a discussion titled ‘Is Self-Replication Required for the Growth of Knowledge?’.

Either in biological evolution, or the evolution of ideas (programs) in a mind, is a mechanism for self-replication required for knowledge to grow? That is, do entities within the system need to be able to recreate themselves, or cause themselves to be recreated, for there to be progress? Why?

The discussion starts with idea #4040.

What is the distinction between replication and self-replication? Does anything "truly" self-replicate?