
2771 ideas match your query.

You could spend some time in a cheap country.

#3774·Dirk Meulenbelt, 9 days ago·Criticized 1

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.

#3773·Dirk Meulenbelt, 9 days ago

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

#3771·Dennis Hackethal (OP) revised 9 days ago·Original #3770

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

#3770·Dennis Hackethal (OP), 9 days ago·Criticism·Criticized 1

Humans use flight-related words even though we can’t fly. From ChatGPT:

  • Elevated (thinking, mood, language)
  • High-level (ideas, overview)
  • Soar (ambitions, prices, imagination)
  • Take off (projects, careers)
  • Grounded (arguments, people)
  • Up in the air (uncertain)
  • Overview (“over-see” from above)
  • Perspective (originally spatial vantage point)
  • Lofty (ideals, goals)
  • Aboveboard (open, visible)
  • Rise / fall (status, power, ideas)
  • Sky-high (expectations, costs)
  • Aerial view (conceptual overview)
  • Head in the clouds (impractical thinking)
#3769·Dennis Hackethal (OP), 9 days ago·Criticism

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.

Yeah, maybe, but again (#3693), those are parochial factors, starting points. Ideas are more important. An AGI could just switch bodies rapidly anyway.

#3768·Dennis Hackethal (OP), 9 days ago·Criticism

So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.

#3767·Dennis Hackethal (OP), 9 days ago·Criticism

2) Skepticism is too different from fallibilism to consider it a continuation.

#3766·Dennis Hackethal (OP), 9 days ago·Criticism

I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism.

#3765·Dennis Hackethal (OP), 9 days ago·Criticism

One rule of thumb financial advisors have told me in the past is to have enough cash on hand to last at least six months without an income.

If you don’t, quitting your job right now could be a bad idea, and your first priority should be to build enough runway.

(This is not financial advice – follow at your own risk.)
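The rule of thumb amounts to a one-line calculation. A minimal sketch (the figures are illustrative assumptions, not from the post):

```python
def runway_months(cash_on_hand, monthly_expenses):
    """Rule of thumb: runway = cash on hand / monthly burn rate."""
    return cash_on_hand / monthly_expenses

# Hypothetical figures: $18,000 saved, $3,000/month in expenses.
print(runway_months(18_000, 3_000))  # → 6.0, i.e. just meets the six-month rule
```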

#3764·Dennis Hackethal, 9 days ago·Criticism·Criticized 1

You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.

#3763·Dennis Hackethal, 9 days ago·Criticism

You describe your job as “excruciating”. That’s reason to quit.

#3762·Dennis Hackethal, 9 days ago·Criticism

No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.

#3760·Dennis Hackethal (OP) revised 9 days ago·Original #3710·Criticism

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

#3758·Dennis Hackethal (OP) revised 9 days ago·Original #3748·Criticized 1

Maybe scepticism is fallibilism taken too far?

#3757·Knut Sondre Sæbø, 10 days ago·Criticized 2

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3755·Knut Sondre Sæbø revised 10 days ago·Original #3751·Criticized 2

If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.

#3754·Knut Sondre Sæbø, 10 days ago

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

#3752·Knut Sondre Sæbø revised 10 days ago·Original #3751·Criticism·Criticized 1

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and what its normal world looks like. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors are useful because they take advantage of aspects that are already salient to a person in order to view other things. So things that are immediately salient to the person have more potency as metaphors.


#3751·Knut Sondre Sæbø, 10 days ago·Criticism·Criticized 1

A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.

None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.

I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
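As a minimal illustration of the point that heuristics are easy to program (this sketch is mine, not from the thread): a greedy rule of thumb for making change takes a few lines and is "good enough" without being fully optimized.

```python
def greedy_change(amount, denominations):
    """Heuristic: repeatedly take the largest coin that still fits.

    Not guaranteed optimal for arbitrary denominations, but a
    'good enough' approximation -- and trivial to program.
    """
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

# With US-style denominations the rule of thumb happens to be optimal:
print(greedy_change(63, [25, 10, 5, 1]))  # → [25, 25, 10, 1, 1, 1]
# With awkward denominations it is merely 'good enough':
print(greedy_change(6, [4, 3, 1]))        # → [4, 1, 1] (optimal would be [3, 3])
```

The point is that the heuristic needs no search over alternatives at all, whereas a provably optimal algorithm (e.g. dynamic programming) requires noticeably more machinery.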

#3750·Dennis Hackethal (OP), 10 days ago·Criticism

Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.

#3749·Dennis Hackethal (OP), 10 days ago·Criticism·Criticized 1

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

#3748·Dennis Hackethal (OP), 10 days ago·Criticism·Criticized 1

Persephone vs. axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to ‘hard to vary’.

#3747·Dennis Hackethal (OP), 10 days ago·Criticism

Read The Fountainhead by Ayn Rand. That should give you some fuel to move forward.

If that’s too long, watch ‘The Simplest Thing in the World’.

#3746·Dennis Hackethal, 10 days ago

You’re right, my mistake.

#3745·Dennis Hackethal (OP), 10 days ago