Activity Feed

  Dirk Meulenbelt revised idea #3773.

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researchers.

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.

  Dirk Meulenbelt commented on idea #3613.

A hiatus would incur a relatively heavy cost: the cost of living + the opportunity cost of lost salary. Earning money as quickly as possible, as early as possible, is important for long-term financial success.

#3613·Tyler Mills (OP), 10 days ago

You could spend some time in a cheap country.

  Dirk Meulenbelt commented on idea #3611.

A hiatus would create a "resume gap," weakening hireability in the field. This is to be avoided, but only assuming working in the field is itself desirable, which may not be the case here, unless better opportunities arise (roles allowing more contact with physics, math, and design -- i.e., "engineering"!).

#3611·Tyler Mills (OP), 10 days ago

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.

  Dennis Hackethal revised criticism #3770 and unmarked it as a criticism.

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

  Dennis Hackethal criticized idea #3743.

That was autocorrect from my cellphone. "Mye" means "a lot" in Norwegian. Not a good idea to have autocorrect on when you're writing in two languages.

#3743·Knut Sondre Sæbø, 6 days ago

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

  Dennis Hackethal criticized idea #3654.

This is also borrowed from cognitive science. But what I meant was to point to the fact that there are “pre-conceptual” models, desires, attentional salience etc. that impinge on and filter input to conscious cognition. An example is how brain regions originally used for moving the body through 3D space are repurposed cognitively to “move around” in idea-space. Some anecdotal evidence for this: notice how many movement metaphors structure propositional thinking. We say we’re close to the truth, we understand, we grasp a concept, we arrive at a conclusion.

#3654·Knut Sondre Sæbø revised 8 days ago

Humans use flight-related words even though we can’t fly. From ChatGPT:

  • Elevated (thinking, mood, language)
  • High-level (ideas, overview)
  • Soar (ambitions, prices, imagination)
  • Take off (projects, careers)
  • Grounded (arguments, people)
  • Up in the air (uncertain)
  • Overview (“over-see” from above)
  • Perspective (originally spatial vantage point)
  • Lofty (ideals, goals)
  • Aboveboard (open, visible)
  • Rise / fall (status, power, ideas)
  • Sky-high (expectations, costs)
  • Aerial view (conceptual overview)
  • Head in the clouds (impractical thinking)

  Dennis Hackethal criticized idea #3755.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3755·Knut Sondre Sæbø revised 6 days ago

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.

Yeah, maybe, but again (#3693), those are parochial factors, starting points. Ideas are more important. AGI could just switch bodies rapidly anyway.

  Dennis Hackethal criticized idea #3755.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3755·Knut Sondre Sæbø revised 6 days ago

So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.

  Dennis Hackethal criticized idea #3757.

Maybe scepticism is fallibilism taken too far?

#3757·Knut Sondre Sæbø, 6 days ago

2) Skepticism is too different from fallibilism to be considered a continuation of it.

  Dennis Hackethal criticized idea #3757.

Maybe scepticism is fallibilism taken too far?

#3757·Knut Sondre Sæbø, 6 days ago

I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism.

  Dennis Hackethal criticized idea #3639.

Option 2: Go on hiatus from the day job/career, and focus on creative pursuits and research, full-time, for some number of months (duration perhaps depending on job opportunities).

#3639·Tyler Mills (OP), 9 days ago

One rule of thumb financial advisors have told me in the past is to have enough cash on hand to last at least six months without an income.

If you don’t, quitting your job right now could be a bad idea, and your first priority should be to build enough runway.

(This is not financial advice – follow at your own risk.)

  Dennis Hackethal criticized idea #3638.

Option 1: Continue working the day job and balancing the other pursuits on the side.

#3638·Tyler Mills (OP), 9 days ago

You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.

  Dennis Hackethal criticized idea #3638.

Option 1: Continue working the day job and balancing the other pursuits on the side.

#3638·Tyler Mills (OP), 9 days ago

You describe your job as “excruciating”. That’s reason to quit.

  Dennis Hackethal revised criticism #3710.

No, see #3706. I’m open to user input (within reason). That covers any creative parts. The non-creative parts can be automated by definition.

No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.

  Dennis Hackethal revised criticism #3748 and unmarked it as a criticism.

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

  Knut Sondre Sæbø commented on idea #3705.

Isn't every theory infinitely underspecified?

This stance is presumably a version of the epistemological cynicism I identify here.

#3705·Dennis Hackethal (OP), 7 days ago

Maybe scepticism is fallibilism taken too far?

  Knut Sondre Sæbø revised criticism #3752 and unmarked it as a criticism.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

  Knut Sondre Sæbø commented on criticism #3752.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3752·Knut Sondre Sæbø revised 6 days ago

If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.

  Knut Sondre Sæbø revised criticism #3751.

I think that depend on the "embodiment" of the AGI. That is how it is like to be that AGI, and how it's normal world looks like. A bat (If they where people) would probably prefer different metaphors than for a human. Humans are very visual, which makes spacial feutures very salient for us. Metaphors are useful because they take advantage of already salient aspects for a person to view other things. So things that is are immidately salient for the person, has more potency as a metaphor.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

  Knut Sondre Sæbø addressed criticism #3733.

Or it might, who knows? An AGI, just like humans, would move around in the world and discover that metaphors are useful, so it might as well use spatial metaphors. If it did, that would be due to convergent evolution of ideas. And even if it didn’t, that could just be because the ideas didn’t converge, not because AGIs don’t have brains.

#3733·Dennis Hackethal (OP), 6 days ago

I think that depend on the "embodiment" of the AGI. That is how it is like to be that AGI, and how it's normal world looks like. A bat (If they where people) would probably prefer different metaphors than for a human. Humans are very visual, which makes spacial feutures very salient for us. Metaphors are useful because they take advantage of already salient aspects for a person to view other things. So things that is are immidately salient for the person, has more potency as a metaphor.

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

  Dennis Hackethal addressed criticism #3749.

Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.

#3749·Dennis Hackethal (OP), 6 days ago

A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.

None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.

I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
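
To make that invitation concrete, here is one minimal sketch in Python of what such a heuristic could look like. Everything in it is hypothetical: representing an explanation as named parts, and the variants_of and still_accounts callbacks, are assumptions introduced purely for illustration, and the hard, creative work is hidden inside those callbacks. This is not Deutsch's method, just the kind of starting point meant above.

    from typing import Callable, Dict, Iterable, List

    # An explanation is modeled, very crudely, as named parts,
    # e.g. {"mechanism": "axial tilt"}. This representation is an assumption.
    Explanation = Dict[str, str]

    def easy_variant_count(explanation: Explanation,
                           variants: Iterable[Explanation],
                           still_accounts: Callable[[Explanation], bool]) -> int:
        # Count variants that still account for what the explanation purports
        # to account for. The more such variants there are, the easier the
        # explanation is to vary -- and, on this heuristic, the worse it is.
        return sum(1 for v in variants if v != explanation and still_accounts(v))

    def pick_hardest_to_vary(candidates: List[Explanation],
                             variants_of: Callable[[Explanation], Iterable[Explanation]],
                             still_accounts: Callable[[Explanation], bool]) -> Explanation:
        # Heuristic choice: prefer the candidate with the fewest easy variants.
        return min(candidates,
                   key=lambda e: easy_variant_count(e, variants_of(e), still_accounts))

The sketch only operationalizes "easy to vary" as "has many functionality-preserving variants"; deciding what counts as a variant, and what counts as still accounting for something, is left entirely to the two callbacks, which is where the open problems live.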

  Dennis Hackethal addressed criticism #3707.

Deutsch contradicts his yardstick for understanding a computational task. He says that you haven’t understood a computational task if you can’t program it. His method of decision-making based on finding good explanations is a computational task. He can’t program it, so he hasn’t understood it.

#3707·Dennis Hackethal (OP), 7 days ago

Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.

  Dennis Hackethal addressed criticism #3747.

Persephone vs axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to ‘hard to vary’.

#3747·Dennis Hackethal (OP), 6 days ago

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

  Dennis Hackethal criticized idea #3726.

Deutsch’s stance in my own words:

The distinguishing characteristic between rationality and irrationality is that rationality is the search for good explanations. All progress comes from the search for good explanations. So the distinction between good vs bad explanations is epistemologically fundamental.

A good explanation is hard to vary “while still accounting for what it purports to account for.” (BoI chapter 1 glossary.) A bad explanation is easy to vary.

For example, the Persephone myth as an explanation of the seasons is easy to change without impacting its ability to explain the seasons. You could arbitrarily replace Persephone and other characters and the explanation would still ‘work’. The explanation based on the tilt of the earth’s axis, on the other hand, is hard to change without breaking it. You can’t just replace the axis with something else, say.

The quality of a theory is a matter of degree. The harder it is to change a theory, the better that theory is. When deciding which explanation to adopt, we should “choose between [explanations] according to how good they are…: how hard to vary.” (BoI chapter 9; see similar remark in chapter 8.)

#3726·Dennis Hackethal (OP) revised 7 days ago

Persephone vs axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to ‘hard to vary’.

  Dennis Hackethal submitted idea #3746.

Read The Fountainhead by Ayn Rand. That should give you some fuel to move forward.

If that’s too long, watch ‘The Simplest Thing in the World’.