Are AI models narrowly creative?

Tyler Mills:

AIs have created output that is not only novel but also seems to constitute new knowledge (resilient information), such as AlphaGo's famous Move 37. It counts as new knowledge because the move was neither explicitly present in the training data nor constructed by the designers.

Edwin de Wit:

This seems to me the same distinction that Deutsch and others have drawn between the genetic evolution we can simulate with evolutionary algorithms and the kind we actually observe in nature. If you want to develop a clear distinction, I think it would be helpful to investigate evolutionary algorithms further. This is how I describe it in my book:

There are several mechanisms that genes use to create variants, including sex, mutation, gene flow, and genetic drift, all of which appear to introduce change randomly. But we now know it cannot be entirely random. Something more is shaping what gets trialed, because when we model and simulate evolution using random changes, we never see the sort of novelties that arose in nature. We see optimization. We see exploitation. We see organisms become better at using resources they already use. But we never see a genuinely new use of a resource emerge. A fin may become better at swimming, but it does not become a limb. A metabolism may become more efficient, but it does not open up an entirely new biological pathway. And yet the natural world is full of exactly such extraordinary adaptations.

Tyler Mills:

I keep returning to the notion of the space, or domain, in which simulated evolution so far operates. It seems we can say that current simulated evolution can discover new knowledge via conjecture and criticism, but it is always bound by a domain predefined by fitness functions, automatic evaluators, and so on, even if that domain itself contains many subdomains.
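A toy sketch may make the point concrete (this is my own minimal illustration, not code from the discussion): in a standard genetic algorithm, the fitness function is fixed in advance by the designer, so every variant the algorithm trials is evaluated inside that predefined space. The genome length, mutation operator, and "count the 1-bits" objective below are all arbitrary choices for illustration.

```python
import random

random.seed(0)
GENOME_LEN = 20

def fitness(genome):
    # The fitness function is fixed by the designer before the run:
    # here, simply the number of 1-bits. The algorithm can only
    # optimize within the space this function defines.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random variation: flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically close to GENOME_LEN after convergence
```

However long this runs, it can only become better at the one task the fitness function specifies; it cannot redefine what counts as fitness, which is the sense in which the domain is externally fixed.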

Then we can say that in nature, and in the minds of people, there is no externally defined space in which exploration is happening; the space is also evolving, also subject to criticism. I suspect this is part of how open-endedness comes about.

But the immediate question here was how to explain why AI is or is not "creative". Saying AIs are "narrowly creative" seems like it could work, or saying they are creative within a fixed domain. The common intuition, I think, is that current AIs are "truly" creative, and I would say this is because the predefined domain (of LLMs, for instance) is gigantic, sculpted by an internet-sized training corpus. But I suppose we should argue that "true creativity" means universal creativity.

I am curious whether there are criticisms of the argument that current AI legitimately creates new knowledge.