Activity Feed
#3613 · Tyler Mills (OP), 10 days ago:
A hiatus would incur a relatively heavy cost: the cost of living + the opportunity cost of lost salary. Earning money as quickly as possible, as early as possible, is important for long-term financial success.
You could spend some time in a cheap country.
#3611 · Tyler Mills (OP), 10 days ago:
A hiatus would create a "resume gap," weakening hireability in the field. This is to be avoided, but only assuming working in the field is itself desirable, which may not be the case here, unless better opportunities arise (roles allowing more contact with physics, math, and design -- i.e. "engineering"!).
This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.
#3743 · Knut Sondre Sæbø, 6 days ago:
That was autocorrect from my cellphone. "Mye" means "a lot" in Norwegian. Not a good idea to have autocorrect on when you're writing in two languages…
I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.
#3654 · Knut Sondre Sæbø, revised 8 days ago:
This is also borrowed from cognitive science. But what I meant was to point to the fact that there are “pre-conceptual” models, desires, attentional salience etc. that impinge on and filter input to conscious cognition. An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to “move around” in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we’re close to the truth, we understand, we grasp a concept, we arrive at a conclusion.
Humans use flight-related words even though we can’t fly. From ChatGPT:
- Elevated (thinking, mood, language)
- High-level (ideas, overview)
- Soar (ambitions, prices, imagination)
- Take off (projects, careers)
- Grounded (arguments, people)
- Up in the air (uncertain)
- Overview (“over-see” from above)
- Perspective (originally spatial vantage point)
- Lofty (ideals, goals)
- Aboveboard (open, visible)
- Rise / fall (status, power, ideas)
- Sky-high (expectations, costs)
- Aerial view (conceptual overview)
- Head in the clouds (impractical thinking)
#3755 · Knut Sondre Sæbø, revised 6 days ago:
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.
Yeah, maybe, but again (#3693), those are parochial factors, starting points. Ideas are more important. An AGI could just switch bodies rapidly anyway.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.
I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism. 2) Skepticism is too different from fallibilism to consider it a continuation.
#3639 · Tyler Mills (OP), 9 days ago:
Option 2: Go on hiatus from the day job/career, and focus on creative pursuits and research, full-time, for some number of months (duration perhaps depending on job opportunities).
One rule of thumb financial advisors have told me in the past is to have enough cash on hand to last at least six months without an income.
If you don’t, quitting your job right now could be a bad idea, and your first priority should be to build enough runway.
(This is not financial advice – follow at your own risk.)
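If it helps make the rule of thumb concrete, here’s a trivial back-of-the-envelope check. The figures are placeholders I made up, not advice:

```python
# Minimal runway check; the numbers below are made-up placeholders, not advice.
def months_of_runway(cash_on_hand: float, monthly_expenses: float) -> float:
    """How many months current savings would cover with zero income."""
    return cash_on_hand / monthly_expenses

runway = months_of_runway(cash_on_hand=18_000, monthly_expenses=2_500)
print(f"Runway: {runway:.1f} months")                      # Runway: 7.2 months
print("Enough buffer" if runway >= 6 else "Build more runway first")
```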
#3638 · Tyler Mills (OP), 9 days ago:
Option 1: Continue working the day job and balancing the other pursuits on the side.
You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.
You describe your job as “excruciating”. That’s reason to quit.
#3705 · Dennis Hackethal (OP), 7 days ago:
Isn't every theory infinitely underspecified?
This stance is presumably a version of the epistemological cynicism I identify here.
Maybe scepticism is fallibilism taken too far?
#3752 · Knut Sondre Sæbø, revised 6 days ago:
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.
#3733 · Dennis Hackethal (OP), 6 days ago:
Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
#3749 · Dennis Hackethal (OP), 6 days ago:
Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.
A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.
None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.
I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
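To make that concrete, here’s a rough, self-contained sketch of what such a heuristic might look like. Everything in it — the dict-of-components representation, the predicts functions, the toy seasons example — is an assumption of mine for illustration, not anything taken from BoI or from Deutsch:

```python
# Hypothetical sketch: "hard to vary" as a heuristic score, not a full
# decision-making algorithm. The component/alternative representation and the
# toy seasons example are made up for illustration.

def hard_to_vary_score(predicts, components, alternatives, observations):
    """Fraction of single-component swaps that stop the explanation from
    accounting for the observations. Higher = harder to vary."""
    swaps = broken = 0
    for key, options in alternatives.items():
        for option in options:
            variant = dict(components, **{key: option})  # swap one component
            swaps += 1
            if predicts(variant) != observations:
                broken += 1
    return broken / swaps if swaps else 0.0

observations = ("cold, dark winters", "warm, bright summers")

# Persephone myth: the prediction doesn't depend on which deity or emotion we
# plug in, so every swap still 'accounts for' the seasons -> easy to vary.
myth = {"deity": "Persephone", "emotion": "grief"}
myth_alternatives = {"deity": ["Loki", "Ra", "Freyja"], "emotion": ["anger", "joy"]}
def myth_predicts(c):
    return observations  # same story regardless of the characters

# Axis tilt: swap out the mechanism and the account no longer works -> hard to vary.
tilt = {"mechanism": "axis tilt"}
tilt_alternatives = {"mechanism": ["orbital distance alone", "divine moods"]}
def tilt_predicts(c):
    return observations if c["mechanism"] == "axis tilt" else ("no seasonal pattern",)

print(hard_to_vary_score(myth_predicts, myth, myth_alternatives, observations))  # 0.0
print(hard_to_vary_score(tilt_predicts, tilt, tilt_alternatives, observations))  # 1.0
```

Of course, the choice of components and alternatives does all the real work here, which is arguably the original objection in new clothes — but it’s the kind of starting point I mean.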
#3707 · Dennis Hackethal (OP), 7 days ago:
Deutsch contradicts his yardstick for understanding a computational task. He says that you haven’t understood a computational task if you can’t program it. His method of decision-making based on finding good explanations is a computational task. He can’t program it, so he hasn’t understood it.
Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.
#3747 · Dennis Hackethal (OP), 6 days ago:
Persephone vs axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to ‘hard to vary’.
Deutsch should instead name some examples the reader would find harder to disagree with, and then walk them through why some explanations are harder to vary than others.
#3726 · Dennis Hackethal (OP), revised 7 days ago:
Deutsch’s stance in my own words:
The distinguishing characteristic between rationality and irrationality is that rationality is the search for good explanations. All progress comes from the search for good explanations. So the distinction between good vs bad explanations is epistemologically fundamental.
A good explanation is hard to vary “while still accounting for what it purports to account for.” (BoI chapter 1 glossary.) A bad explanation is easy to vary.
For example, the Persephone myth as an explanation of the seasons is easy to change without impacting its ability to explain the seasons. You could arbitrarily replace Persephone and other characters and the explanation would still ‘work’. The explanation based on the tilt of the earth’s axis, on the other hand, is hard to change without breaking it. You can’t just replace the axis tilt with something else, say.
The quality of a theory is a matter of degree. The harder it is to change a theory, the better that theory is. When deciding which explanation to adopt, we should “choose between [explanations] according to how good they are…: how hard to vary.” (BoI chapter 9; see similar remark in chapter 8.)
Persephone vs axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to ‘hard to vary’.
Read The Fountainhead by Ayn Rand. That should give you some fuel to move forward.
If that’s too long, watch ‘The Simplest Thing in the World’.