3074 ideas match your query:

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

#3778·Dennis Hackethal (OP) revised about 1 month ago·Original #3748

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.

#3776·Dennis Hackethal revised about 1 month ago·Original #3773·Criticism

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.

#3775·Dirk Meulenbelt revised about 1 month ago·Original #3773·Criticized (1)

You could spend some time in a cheap country.

#3774·Dirk Meulenbelt, about 1 month ago·Criticized (1)

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.

#3773·Dirk Meulenbelt, about 1 month ago

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

#3771·Dennis Hackethal (OP) revised about 1 month ago·Original #3770

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

#3770·Dennis Hackethal (OP), about 1 month ago·Criticism·Criticized (1)

Humans use flight-related words even though we can’t fly. From ChatGPT:

  • Elevated (thinking, mood, language)
  • High-level (ideas, overview)
  • Soar (ambitions, prices, imagination)
  • Take off (projects, careers)
  • Grounded (arguments, people)
  • Up in the air (uncertain)
  • Overview (“over-see” from above)
  • Perspective (originally spatial vantage point)
  • Lofty (ideals, goals)
  • Aboveboard (open, visible)
  • Rise / fall (status, power, ideas)
  • Sky-high (expectations, costs)
  • Aerial view (conceptual overview)
  • Head in the clouds (impractical thinking)

#3769·Dennis Hackethal (OP), about 1 month ago·Criticism

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.

Yeah maybe but again (#3693), those are parochial factors, starting points. Ideas are more important. AGI could just switch bodies rapidly anyway.

#3768·Dennis Hackethal (OP), about 1 month ago·Criticism

So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.

#3767·Dennis Hackethal (OP), about 1 month ago·Criticism

2) Skepticism is too different from fallibilism to consider it a continuation.

#3766·Dennis Hackethal (OP), about 1 month ago·Criticism

I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism.

#3765·Dennis Hackethal (OP), about 1 month ago·Criticism

One rule of thumb financial advisors have given me in the past is to keep enough cash on hand to last at least six months without an income.

If you don’t, quitting your job right now could be a bad idea, and your first priority should be to build enough runway.

(This is not financial advice – follow at your own risk.)
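
In code, the rule is just a division. A minimal sketch, where both figures are made-up placeholders rather than recommendations:

```python
# Toy runway check based on the six-month rule of thumb above.
# Both figures are invented placeholders.
savings = 24_000          # cash on hand
monthly_expenses = 3_500  # rent, food, insurance, ...

runway_months = savings / monthly_expenses
print(f"Runway: {runway_months:.1f} months")  # Runway: 6.9 months
if runway_months < 6:
    print("Build more runway before quitting.")
```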

#3764·Dennis Hackethal, about 1 month ago·Criticism·Criticized (1)

You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.

#3763·Dennis Hackethal, about 1 month ago·Criticism

You describe your job as “excruciating”. That’s reason to quit.

#3762·Dennis Hackethal, about 1 month ago·Criticism

No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.

#3760·Dennis Hackethal (OP) revised about 1 month ago·Original #3710·Criticism

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

#3758·Dennis Hackethal (OP) revised about 1 month ago·Original #3748·Criticized (1)

Maybe scepticism is fallibilism taken too far?

#3757·Knut Sondre Sæbø, about 1 month ago·Criticized (2)

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3755·Knut Sondre Sæbø revised about 1 month ago·Original #3751·Criticized (2)

If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.

#3754·Knut Sondre Sæbø, about 1 month ago

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3752·Knut Sondre Sæbø revised about 1 month ago·Original #3751·Criticism·Criticized (1)

I think that depends on the "embodiment" of the AGI; that is, what it is like to be that AGI and what its normal world looks like. A bat (if bats were people) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors are useful because they take advantage of aspects that are already salient to a person to view other things. So whatever is immediately salient to a person has more potency as a metaphor.


#3751·Knut Sondre Sæbø, about 1 month ago·Criticism·Criticized (1)

A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.

None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.

I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
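
As a first stab, here is a minimal Python sketch of one such heuristic. Everything in it, from the function names to the toy data, is invented for illustration and is not taken from Deutsch: it treats an explanation as a parameterized family of predictors and uses, as a crude proxy for "easy to vary", the number of conceivable data sets the family could be tuned to accommodate.

```python
import itertools

def accommodates(family, param_grid, data, tol=0.01):
    """True if some parameter setting makes the family fit every point."""
    return any(all(abs(family(x, p) - y) <= tol for x, y in data)
               for p in param_grid)

xs = [1, 2, 3]
values = range(7)  # conceivable observation values at each x
datasets = [list(zip(xs, ys)) for ys in itertools.product(values, repeat=3)]

# Tight explanation: a single law, y = a * x, with one free parameter.
linear = lambda x, p: p[0] * x
linear_grid = [(a,) for a in values]

# Sloppy explanation: an independent free knob for each observation.
lookup = lambda x, p: p[x - 1]
lookup_grid = list(itertools.product(values, repeat=3))

tight = sum(accommodates(linear, linear_grid, d) for d in datasets)
sloppy = sum(accommodates(lookup, lookup_grid, d) for d in datasets)
print(f"law fits {tight} of {len(datasets)} conceivable data sets")    # 3 of 343
print(f"knobs fit {sloppy} of {len(datasets)} conceivable data sets")  # 343 of 343
```

On this toy measure, the single law rules out almost every conceivable data set, while the free-knob "explanation" accommodates all of them, which is roughly the sense in which it is easy to vary. Whether a proxy like this captures what Deutsch means is exactly the open question.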

#3750·Dennis Hackethal (OP), about 1 month ago·Criticism

Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.

#3749·Dennis Hackethal (OP), about 1 month ago·Criticism·Criticized (1)

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

#3748·Dennis Hackethal (OP), about 1 month ago·Criticism·Criticized (1)