Search

1824 ideas match your query.

Huh, no. I didn’t say you found a level where the epistemology is unproblematic to specify and turned that into Veritula. I said the opposite. You misunderstood me.

#3801·Dirk Meulenbelt, 23 days ago·Criticism

As I write in my article:

… Popper did formalize/specify much of his epistemology, such as the notions of empirical content and degrees of falsifiability. So why couldn’t Deutsch formalize the steps for finding the quality of a given explanation?

#3800·Dennis HackethalOP, 23 days ago·Criticism

Deutsch’s yardstick applies to computational tasks. It’s not meant for other things. It’s not clear to me that the criterion of democracy is a computational task.

#3799·Dennis HackethalOP, 23 days ago·Criticism

Yes, many ideas fail Deutsch’s yardstick. But so what? That doesn’t make things better.

#3798·Dennis HackethalOP, 23 days ago·Criticism

Veritula and hard to vary are different in this regard. Deutsch claims that ‘hard to vary’ is epistemologically fundamental, that it’s at the core of rationality, and that all progress is made by choosing between explanations based on how hard to vary they are. In other words, he suggests (though only vaguely) a decision-making method.

Veritula has a different decision-making method: one of criticizing ideas and adopting only those with no pending criticisms. That decision-making method is fully specified, with zero vagueness or open questions (that I’m aware of).

Veritula admittedly does not specify in advance what criticisms people can submit. But that’s not a problem. It’d be like asking Deutsch to specify in advance which explanations people can judge to be easy or hard to vary. That’s not the kind of specification ‘hard to vary’ is lacking.

#3796·Dennis HackethalOP, 23 days ago·Criticism

The ancient Greeks might have found the Persephone myth extremely hard to vary, e.g. due to cultural constraints. They wouldn’t have agreed that one could just swap out Persephone for someone else.

#3794·Dennis HackethalOP, 23 days ago·Criticism

But then the ease with which a criticism could be varied might have no effect on its parent. So why even bother having a notion of ‘easiness to vary’ at that point?

#3793·Dennis HackethalOP, 23 days ago·Criticism

Even so, if a criticism gets score -10, that will push the parent theory’s score above 0.

#3791·Dennis HackethalOP, 23 days ago·Criticism

Large overlap with idea #3783 – effectively a duplicate. You could revise that idea to include finding “a much better job that allows you the energy for research.”

#3788·Dennis Hackethal, 23 days ago·Criticism

This seems more like a specific implementation of #3782 than a standalone criticism.

#3787·Dennis Hackethal, 23 days ago·Criticism

Hmm, could you give examples of such contradictions between implicit and explicit short-term preferences?

#3786·Erik Orrje, 23 days ago

How much time and energy do you really have for research while working? 1 hour daily? 2 hours? 4 hours?

Leaving your job makes consistent, high-quality daily research possible.

#3783·Zakery Mizell, 24 days ago·Criticism

Deutsch should instead name some examples the reader would find easier to disagree with, and then walk them through why some explanations are harder to vary than others.

#3778·Dennis HackethalOP revised 24 days ago·Original #3748

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researcher.

#3776·Dennis Hackethal revised 24 days ago·Original #3773·Criticism

This is solved by actively doing some visible stuff you'd want to do anyway as an AGI researchers.

#3773·Dirk Meulenbelt, 24 days ago

I don’t know what kind of phone you use, but iPhone keyboards have support for multiple languages. You can switch between them. Should make false autocorrects rarer.

#3771·Dennis HackethalOP revised 24 days ago·Original #3770

Humans use flight-related words even though we can’t fly. From ChatGPT:

  • Elevated (thinking, mood, language)
  • High-level (ideas, overview)
  • Soar (ambitions, prices, imagination)
  • Take off (projects, careers)
  • Grounded (arguments, people)
  • Up in the air (uncertain)
  • Overview (“over-see” from above)
  • Perspective (originally spatial vantage point)
  • Lofty (ideals, goals)
  • Aboveboard (open, visible)
  • Rise / fall (status, power, ideas)
  • Sky-high (expectations, costs)
  • Aerial view (conceptual overview)
  • Head in the clouds (impractical thinking)

#3769·Dennis HackethalOP, 24 days ago·Criticism

I think that depends on the "embodiment" of the AGI; that is, what it's like to be that AGI and how its normal world appears.

Yeah, maybe, but again (#3693), those are parochial factors, starting points. Ideas are more important. An AGI could just switch bodies rapidly anyway.

#3768·Dennis HackethalOP, 24 days ago·Criticism

So to train an AGI, I would think it's more useful for that AGI to leverage the salient aspects that are pre-given.

You don’t “train” an AGI any more than you’d “train” a child. We’re not talking about dogs here.

#3767·Dennis HackethalOP, 24 days ago·Criticism

2) Skepticism is too different from fallibilism to consider it a continuation.

#3766·Dennis HackethalOP, 24 days ago·Criticism

I don’t think so, for two reasons. 1) Skepticism came long before Popper’s fallibilism.

#3765·Dennis HackethalOP, 24 days ago·Criticism

You’re young. Now’s the time to take (educated, calculated) risks. Even if quitting turns out to be a mistake, you have all the time in the world to correct the mistake and recover. You can always find some day job somewhere. But you may not always be able to pursue your passion.

#3763·Dennis Hackethal, 24 days ago·Criticism

You describe your job as “excruciating”. That’s reason to quit.

#3762·Dennis Hackethal, 24 days ago·Criticism

No, see #3706. I’m open to user input (within reason). That covers creative parts. The non-creative parts can be automated by definition.

#3760·Dennis HackethalOP revised 24 days ago·Original #3710·Criticism

If this is the case, it would make sense to make AGI as similar to ourselves as possible, so AGI can use our pre-existing knowledge more directly.

#3754·Knut Sondre Sæbø, 25 days ago