Search Ideas
1824 ideas match your query.
Huh, no. What I said was that you found a level where the epistemology is unproblematic to specify and turned that into Veritula. That's the opposite of what you're attributing to me. You misunderstood me.
As I write in my article:
… Popper did formalize/specify much of his epistemology, such as the notions of empirical content and degrees of falsifiability. So why couldn’t Deutsch formalize the steps for finding the quality of a given explanation?
Deutsch’s yardstick applies to computational tasks. It’s not meant for other things. It’s not clear to me that the criterion of democracy is a computational task.
Yes, many ideas fail Deutsch’s yardstick. But so what? That doesn’t make things better.
Veritula and hard to vary are different in this regard. Deutsch claims that ‘hard to vary’ is epistemologically fundamental, that it’s at the core of rationality, and that all progress is made by choosing between explanations based on how hard to vary they are. In other words, he suggests (though only vaguely) a decision-making method.
Veritula has a different decision-making method: one of criticizing ideas and adopting only those with no pending criticisms. That decision-making method is fully specified, with zero vagueness or open questions (that I’m aware of).
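For concreteness, here is a minimal Python sketch of that decision rule as I read it: adopt an idea only if it has no pending criticisms, where a criticism itself stops pending once it has a pending criticism of its own. The `Idea` class, its field names, and the recursive treatment of criticisms are illustrative assumptions, not Veritula's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    text: str
    criticisms: list["Idea"] = field(default_factory=list)

def is_pending(criticism: Idea) -> bool:
    """A criticism stands as long as none of its own criticisms stand."""
    return not any(is_pending(c) for c in criticism.criticisms)

def is_adopted(idea: Idea) -> bool:
    """Adopt only ideas with zero pending criticisms."""
    return not any(is_pending(c) for c in idea.criticisms)

# Usage: a single unanswered criticism blocks adoption; refuting that
# criticism (by criticizing it) unblocks the parent idea again.
theory = Idea("Quit the job to do research full time.")
objection = Idea("You will run out of savings first.")
theory.criticisms.append(objection)
assert not is_adopted(theory)
objection.criticisms.append(Idea("Savings cover two years of expenses."))
assert is_adopted(theory)
```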
It's true that Veritula does not specify ahead of time which criticisms people can submit. But that's not a problem. Demanding that would be like asking Deutsch to specify ahead of time which explanations people may judge to be easy or hard to vary. That's not the kind of specification that's lacking with hard to vary.
The ancient Greeks might have found the Persephone myth extremely hard to vary, e.g. due to cultural constraints. They wouldn't have agreed that one could just swap out Persephone for someone else.
But then the ease with which a criticism could be varied might have no effect on its parent. So why even bother having a notion of ‘easiness to vary’ at that point?
Even so, if a criticism gets a score of -10, that will push the parent theory's score above 0.
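To make the arithmetic concrete, here is a minimal sketch under an assumed scoring rule (a parent's score is its base score minus the sum of its criticisms' scores). The rule, the `parent_score` function, and the numbers are illustrative assumptions, not a specification of Veritula's actual scoring.

```python
def parent_score(base: float, criticism_scores: list[float]) -> float:
    # Assumed rule: criticisms subtract their scores from the parent,
    # so a negatively scored criticism raises the parent's score.
    return base - sum(criticism_scores)

# A parent starting at -3 with one criticism scored -10 ends up above 0:
# -3 - (-10) = 7.
assert parent_score(-3, [-10]) == 7
```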
Large overlap with idea #3783 – effectively a duplicate. You could revise that idea to include finding “a much better job that allows you the energy for research.”
This seems more like a specific implementation of #3782 than a standalone criticism.
Hmm, could you give examples of such contradictions between implicit and explicit short-term preferences?
How much time and energy do you really have for research while working? One hour a day? Two hours? Four?
Leaving your job would make consistent, high-quality daily research possible.
Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.
This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.
I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.
Humans use flight-related words even though we can’t fly. From ChatGPT:
- Elevated (thinking, mood, language)
- High-level (ideas, overview)
- Soar (ambitions, prices, imagination)
- Take off (projects, careers)
- Grounded (arguments, people)
- Up in the air (uncertain)
- Overview (“over-see” from above)
- Perspective (originally spatial vantage point)
- Lofty (ideals, goals)
- Aboveboard (open, visible)
- Rise / fall (status, power, ideas)
- Sky-high (expectations, costs)
- Aerial view (conceptual overview)
- Head in the clouds (impractical thinking)
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.
Yeah, maybe, but again (#3693), those are parochial factors, starting points. Ideas are more important. An AGI could just switch bodies rapidly anyway.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.
I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism.
2) Skepticism is too different from fallibilism to consider it a continuation.
You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.
You describe your job as “excruciating”. That’s reason to quit.
No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.
If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.