Search Ideas
2733 ideas match your query.
@lola-trimble suggested during a space that a theory is hard to vary simply when it’s not easy to vary; in other words, being hard to vary is just the absence of a flaw. So the maximum score would be 0, not +1,000 or whatever. In which case ‘hard to vary’ isn’t an endorsement.
Large overlap with idea #3783 – effectively a duplicate. You could revise that idea to include finding “a much better job that allows you the energy for research.”
This seems more like a specific implementation of #3782 than a standalone criticism.
Hmm, could you give examples of such contradictions between implicit and explicit short-term preferences?
Have you fully used your cash to free up time and energy after work?
You may have money for laundry services, cleaning, cooking, and so on. Much of what takes up time in your day can be outsourced for money, giving you space to do research.
Leaving the job means more time for research. It also means more time to find a much better job that allows you the energy for research.
Leaving gives space for better balance.
How much time and energy do you really have for research while working? One hour a day? Two? Four?
Leaving your job allows for the possibility of consistent, high-quality research every day.
Consider your current balance of working and research.
Could you cut other activities, keep the job, and increase focus on research?
Deutsch’s stance in my own words:
What distinguishes rationality from irrationality is that rationality is the search for good explanations. All progress comes from the search for good explanations. So the distinction between good and bad explanations is epistemologically fundamental.
A good explanation is hard to vary “while still accounting for what it purports to account for.” (BoI chapter 1 glossary.) A bad explanation is easy to vary.
For example, the Persephone myth as an explanation of the seasons is easy to change without impacting its ability to explain the seasons. You could arbitrarily replace Persephone and other characters and the explanation would still ‘work’. The axis-tilt explanation of the seasons, on the other hand, is hard to change without breaking it. You can’t just replace the Earth’s axial tilt with something else, say.
The quality of a theory is a matter of degree. The harder it is to change a theory, the better that theory is. When deciding which explanation to adopt, we should “choose between [explanations] according to how good they are…: how hard to vary.” (BoI chapter 9; see similar remark in chapter 8.)
Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.
This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.
I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.
Humans use flight-related words even though we can’t fly. From ChatGPT:
- Elevated (thinking, mood, language)
- High-level (ideas, overview)
- Soar (ambitions, prices, imagination)
- Take off (projects, careers)
- Grounded (arguments, people)
- Up in the air (uncertain)
- Overview (“over-see” from above)
- Perspective (originally spatial vantage point)
- Lofty (ideals, goals)
- Aboveboard (open, visible)
- Rise / fall (status, power, ideas)
- Sky-high (expectations, costs)
- Aerial view (conceptual overview)
- Head in the clouds (impractical thinking)
I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.
Yeah, maybe, but again (#3693): those are parochial factors, starting points. Ideas are more important. An AGI could just switch bodies rapidly anyway.
So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.
You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.
I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism.
2) Skepticism is too different from fallibilism to consider it a continuation.
One rule of thumb financial advisors have told me in the past is to have enough cash on hand to last at least six months without an income.
If you don’t, quitting your job right now could be a bad idea, and your first priority should be to build enough runway.
(This is not financial advice – follow at your own risk.)
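A minimal sketch of that runway arithmetic, in Python. The expense and savings figures are hypothetical placeholders; only the six-month threshold comes from the rule of thumb above:

```python
# Rough runway check: do savings cover at least six months of expenses?
# MONTHLY_EXPENSES and SAVINGS are hypothetical placeholder figures.

MONTHLY_EXPENSES = 2_500   # rent, food, insurance, etc.
SAVINGS = 12_000
TARGET_MONTHS = 6          # the rule-of-thumb threshold

runway_months = SAVINGS / MONTHLY_EXPENSES

if runway_months >= TARGET_MONTHS:
    print(f"Runway: {runway_months:.1f} months. Quitting clears the bar.")
else:
    shortfall = TARGET_MONTHS * MONTHLY_EXPENSES - SAVINGS
    print(f"Runway: {runway_months:.1f} months. Build about ${shortfall:,} more first.")
```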
You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.
You describe your job as “excruciating”. That’s reason to quit.
No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.