Welcome to Veritula, @netsu. Check out this guide to understand how Veritula works and learn more about rationality. You may also find one of our discussions interesting.
What brings you to Veritula?
Hello, and nice to meet you. Your tweet https://x.com/dchackethal/status/2031465139401093501 brought me here.
It seemed relevant to my curiosity about AGI, and since I believe in synergy and want to be surrounded by more of that kind of context, I signed up, just in case.
If you're interested in discussing and sharing some AGI-related thoughts, I'm in, just let me know. I'm not a professional in this field (just an average software engineer), but I've investigated the topic for quite a while, so I believe I have something to put on the table. And with LLMs coming into our lives, the path from the vision to the result has become notably shorter, so, who knows, maybe we can really bring something beautiful to life.
And with LLMs coming into our lives, the path from the vision to [AGI] has become notably shorter…
Have you read any David Deutsch, or listened to any interviews with him? The Beginning of Infinity is very good. You might enjoy chapter 7, where he explains why chatbots don't bring us closer to AGI.
This article of his is also good.
Let me know what you think of his stance.
The Beginning of Infinity
I've heard about it, but haven't read it. I don't expect AGI from LLMs. But they're an awesome tool that helps speed up learning, research, and prototyping. And all this hype channels some money into the topic, which is good. Indeed, if natural intelligence is possible, why not artificial? As for the roadmap, I put more trust in good old-fashioned symbolic AGI and formal methods. NARS + AIXI + an elegant dependent, modal, or substructural (maybe homotopy) typed programming language with strong meta-theoretic properties, as a carrier of observations, knowledge, and judgments + a bit of game theory and evolutionary psychology = this is the way, I believe.
And maybe some tricky non-von Neumann computational architecture to get nice computational complexity for all that (not sure about this, but it plausibly makes sense to use some sort of analog computation in addition to digital).
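To make the NARS ingredient of that recipe concrete: here's a minimal sketch (in Python) of NARS's revision rule, which merges two truth values (frequency, confidence) about the same statement into one; the evidential horizon `k = 1` is the conventional default, and the function name is just my own for illustration:

```python
def revision(f1, c1, f2, c2, k=1.0):
    """NARS-style revision: pool two independent pieces of evidence
    (frequency f, confidence c) about the same statement."""
    # convert confidence back to an evidence weight: c = w / (w + k)
    w1 = k * c1 / (1.0 - c1)
    w2 = k * c2 / (1.0 - c2)
    w = w1 + w2
    f = (w1 * f1 + w2 * f2) / w  # frequency is evidence-weighted
    c = w / (w + k)              # confidence grows with total evidence
    return f, c
```

Merging fully contradictory evidence of equal weight, e.g. `revision(1.0, 0.5, 0.0, 0.5)`, lands on frequency 0.5 but with higher confidence than either input, which is exactly the behavior you want from an evidence-pooling operator.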
Since the carrier language is the foundation, I'm significantly stuck in my attempts to elaborate this topic more deeply. https://x.com/VictorTaelin has made huge progress in this direction, I believe, but I don't know the details. Disappointingly few people around the world are working on this (though that could change quickly with current trends). The next small step is not only representing knowledge and reasoning about it, but compressing and synthesizing knowledge (which LLMs, in their way, do not so badly, but not very consistently or effectively), through AIXI. Then an epistemic framework, like NARS, to learn from real-world empirical experience. Only then comes complex game theory, goal setting, economics, and the interpretation of others' behavior. And only after that does it make sense to discuss consciousness, as introspection on one's observation, interpretation, and modeling of others.
The most ridiculous takes are about the so-called Turing test, which, AFAIK, was originally just a bad misogynistic joke. Evolutionary-psychology experiments of the kind people have already set up to probe the limits of various animals' cognitive abilities and capacity for judgment (e.g. role-playing: what one knows about what the other knows about them, and vice versa), or the development of infants' abilities to interpret concepts like the geometry of space or cause and effect, would be far better criteria for evaluating an AGI system.
What is awesome about LLMs is how easy they have made interdisciplinary meta-analysis.
https://www.sciencedirect.com/science/article/abs/pii/S0003347205807031