Search Ideas
2771 ideas match your query:
This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.
I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.
Humans use flight-related words even though we can’t fly. From ChatGPT:
- Elevated (thinking, mood, language)
- High-level (ideas, overview)
- Soar (ambitions, prices, imagination)
- Take off (projects, careers)
- Grounded (arguments, people)
- Up in the air (uncertain)
- Overview (“over-see” from above)
- Perspective (originally spatial vantage point)
- Lofty (ideals, goals)
- Aboveboard (open, visible)
- Rise / fall (status, power, ideas)
- Sky-high (expectations, costs)
- Aerial view (conceptual overview)
- Head in the clouds (impractical thinking)
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.
Yeah maybe but again (#3693), those are parochial factors, starting points. Ideas are more important. AGI could just switch bodies rapidly anyway.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.
I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism. 2) Skepticism is too different from fallibilism to consider it a continuation.
One rule of thumb financial advisors have told me in the past is to have enough cash on hand to last at least six months without an income.
If you don’t, quitting your job right now could be a bad idea, and your first priority should be to build enough runway.
(This is not financial advice – follow at your own risk.)
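To put rough numbers on that runway rule (a toy sketch with made-up figures; again, not advice):

```python
# Toy runway check with hypothetical numbers; plug in your own.
savings = 12_000          # cash on hand
monthly_expenses = 2_500  # rent, food, insurance, etc.

runway_months = savings / monthly_expenses
print(f"Runway: {runway_months:.1f} months")  # Runway: 4.8 months

if runway_months < 6:
    print("Below the six-month rule of thumb; build more runway before quitting.")
```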
You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.
You describe your job as “excruciating”. That’s reason to quit.
No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears. A bat (if it were a person) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors work because they leverage already-salient aspects of experience to illuminate other things. So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and what its normal world looks like. A bat (if bats were people) would probably prefer different metaphors than a human would. Humans are very visual, which makes spatial features very salient to us. Metaphors are useful because they take advantage of aspects that are already salient to a person to illuminate other things. So things that are immediately salient to a person have more potency as metaphors.
A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.
None of this means a heuristic couldn’t be programmed. On the contrary, heuristics sound easier to program than full-fledged, ‘proper’ algorithms.
I’d be happy to see some pseudo-code that uses workarounds/heuristics. That’d be a fine starting point.
Maybe Deutsch just means hard to vary as a heuristic, not as a full-fledged decision-making algorithm.
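Here’s a rough sketch of the kind of thing I mean (my own toy example, not anything Deutsch proposes): a greedy nearest-neighbor routing rule. It’s trivially programmable, it isn’t optimized or guaranteed to give the best route, and it’s still ‘good enough’ in the sense of the definition above.

```python
import math

def nearest_neighbor_route(stops):
    """Order stops greedily: always visit the closest unvisited stop next."""
    if not stops:
        return []
    remaining = list(stops)
    route = [remaining.pop(0)]  # start at the first stop given
    while remaining:
        last = route[-1]
        # Rule of thumb: pick whatever is closest right now,
        # ignoring whether that choice hurts the route overall.
        nearest = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nearest)
        route.append(nearest)
    return route

print(nearest_neighbor_route([(0, 0), (5, 1), (1, 1), (6, 0)]))
# [(0, 0), (1, 1), (5, 1), (6, 0)]  (short, but not necessarily the shortest)
```

Something analogous, a crude programmable rule of thumb rather than a watertight decision procedure, is presumably what ‘hard to vary as a heuristic’ would look like.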
Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.
Persephone vs. axis tilt is low-hanging fruit. The reader finds it easy to disagree with the Persephone myth and easy to agree with the axis tilt, from cultural background alone. But that doesn’t mean there’s anything to ‘hard to vary’.
Read The Fountainhead by Ayn Rand. That should give you some fuel to move forward.
If that’s too long, watch ‘The Simplest Thing in the World’.