Search Ideas
“Justification without finality is fake.” (#4391) In other words, if it doesn’t claim to be final, it’s not justification.
Implemented as of ecc72ff. Check your profile.
This is the first idea posted straight to my profile, outside of discussions.
Dollar-Cost Averaging
Dollar-cost averaging (DCA) is when you invest a fixed amount on a regular basis regardless of market developments.
This practice can work well long term for assets that reflect the value of the entire stock market (or a big part of it).
Long term, we can expect the stock market as a whole to gain value. So if you invest part of your income every month, say, then your position will grow in the long run.
In the meantime, you get to reduce risk by not investing all your money at once. You also get to react to developments that affect the stock market and can decide to interrupt your investment schedule. But again, the idea is typically to invest regardless of market developments. I personally like ‘boring’ investment strategies, meaning strategies that are automated and reliable.
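As a minimal sketch of the arithmetic behind DCA (the monthly budget and share prices below are hypothetical, chosen only for illustration), note how a fixed budget automatically buys more shares when prices are low:

```python
# Dollar-cost averaging sketch with hypothetical monthly share prices.
monthly_budget = 100.0
prices = [10.0, 8.0, 12.5, 10.0]  # assumed prices, one per month

# A fixed budget buys budget/price shares each month.
shares = sum(monthly_budget / p for p in prices)
invested = monthly_budget * len(prices)
avg_cost = invested / shares  # average cost per share actually paid
```

With these numbers, the average cost per share (about 9.88) comes out below the simple average of the prices (10.125), because the fixed budget buys more shares in the cheap months.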
… regardless of market developments.
vs
You also get to react to developments …
A contradiction.
But this sounds like you’re saying justificationism is necessarily the same as foundationalism. Whereas in #4392 you agreed it’s only a kind of justificationism.
I’m not sure foundationalism and justificationism are quite the same thing.
You are right. Foundationalism is a kind of justificationism. The secure foundation is a kind of justification.
I will have to rewrite this in my article.
Indeed. Justification without finality is fake.
"X is true because of Y, but we can discuss Y"
Is functionally the same as
"X is true and we can discuss why"
The same passage quoted in #4388 (the first one) just links to an entire page with no quotes or section information. That makes verifying the information harder: readers would have to read the entire page.
Sources should be specific: either give a verbatim quote or link to a highlight.
The same passage quoted in #4388 (the first one) links to a secondary source on Popper. Secondary sources on Popper are usually bad. Use a primary source – something Popper himself said.
The article says:
A follower of the philosopher Karl Popper would object: isn’t this just foundationalism in disguise? … Popper showed that’s impossible: any justification needs a deeper justification, and that one needs another, so you either chase reasons forever or stop at one you can’t defend.
I didn’t read the entire linked page, but based on a word search for ‘regress’, it attributes the infinite-regress problem to Hans Albert, not Popper:
[Albert] argues that any attempt at justification faces a three-pronged difficulty that is traceable to Agrippa: One alternative leads to an infinite regress as one seeks to prove one assumption but then needs to assume some new one…
For a tiebreaker, consider this Wiktionary definition of justificationism (links removed):
An approach that regards the justification of a claim as primary, while the claim itself is secondary…
Since this quote doesn’t mention finality, it sounds more in line with BoI.
Just because Dirk’s notion of justificationism breaks with BoI’s doesn’t mean Dirk is wrong. BoI could be wrong.
The article says:
[Justificationism] is the idea that beliefs can be fully justified, proven true by some final authority beyond question.
This definition breaks with BoI. The glossary from ch. 1 says:
[Justificationism is t]he misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.
Note that this second quote says nothing about finality “beyond question”.
Dirk writes:
Foundationalism, or justificationism, is the idea that beliefs can be fully justified, proven true by some final authority beyond question.
I’m not sure foundationalism and justificationism are quite the same thing.
From BoI ch. 1 glossary:
[Justificationism is t]he misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.
Whereas foundationalism describes a prerequisite for knowledge to grow (properly). As in, needing a secure foundation or else the whole edifice falls apart.
I could see foundationalism being a flavor of justificationism, but not the same thing.
As written, a limitation is placed on users, not on Veritula. I want to set expectations and protect my time by preventing an obligation to have extended discussions over moderation decisions. I remain free to make exceptions.
I have zero experience on the drug market, but I think it’s fair to assume that companies that want to get business by inhibiting people’s creativity rather than enhancing it don’t particularly care about consent.
I don’t expect honest advertising from such people. I expect trickery, not consent.
The same decision may be appealed only once.
Does this not inhibit error correction? Why not just leave this to the discretion of Veritula, on a case-by-case basis?
Predatory businesses can’t limit customers’ creativity without the consent of the customer, so these issues are inextricably bound.
Limitations of Veritula
Veritula can help you discover a bit of truth.
It’s not guaranteed to do so. It doesn’t give you a formula for truth-seeking. There’s no guarantee that an idea with no pending criticisms won’t get a new criticism tomorrow. All ideas are tentative in nature. That’s not a limitation of Veritula per se but of epistemology generally (Karl Popper).
There are currently no safeguards against bad actors. For example, people can keep submitting arbitrary criticisms in rapid succession just to ‘save’ their pet ideas. There could be safeguards such as rate-limiting criticisms, but rate limits encourage workarounds such as brigading, creating sock puppets, and so on. That said, I think these problems are soluble.
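As one possible shape for such a safeguard (a sketch only; the function names, window size, and limit here are hypothetical and not part of Veritula), a sliding-window rate limiter could cap how many criticisms a user submits per hour:

```python
# Hypothetical sliding-window rate limiter for criticism submissions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600  # assumed window: one hour
MAX_PER_WINDOW = 5     # assumed limit: five criticisms per window

# user id -> timestamps of that user's recent submissions
_submissions = defaultdict(deque)

def allow_criticism(user_id, now=None):
    """Return True if the user may submit another criticism right now."""
    now = time.time() if now is None else now
    recent = _submissions[user_id]
    # Drop timestamps that have aged out of the window.
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_PER_WINDOW:
        return False
    recent.append(now)
    return True
```

This doesn’t stop sock puppets (each new account gets a fresh window), which is exactly the evasion problem mentioned above.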
Opposing viewpoints should be defined clearly and openly. Not doing so hinders truth-seeking and rationality (Ayn Rand).
Personal attacks poison rational discussions because they turn an open, objective, impartial truth-seeking process into a defensive mess. They shift the topic of the discussion from the ideas themselves to the participants in a bad way. People are actually open to harsh criticism as long as their interlocutor shows concern for how it lands (Chris Voss). I may use ‘AI’ at some point to analyze the tone of an idea upon submission.
Veritula works best for conscientious people with an open mind – people who aren’t interested in defending their ideas but in correcting errors. That’s one of the reasons discussions shouldn’t get personal. Veritula can work to resolve conflicts between adversaries, but I think that’s much harder. Any situation where people argue to be right rather than to find truth is challenging. In those cases, it’s best if an independent third party uses Veritula on their behalf to adjudicate the conflict objectively.
Veritula works best for explicit ideas. If you have an inexplicit criticism of an idea, say, make a reasonable effort to make the criticism explicit first, then add it to Veritula. If you can’t, add a placeholder for the inexplicit criticism – something like ‘I have an inexplicit criticism of this idea’. (The distinction between explicit vs inexplicit ideas goes back to David Deutsch. ‘Inexplicit’ means ‘not expressed in words or symbols’.)
I agree, but this criticism chain is about predatory businesses limiting their customers’ creativity, not their own.
It is not the business of the government to prevent people from severely limiting their own creativity.
denies human creativity
No, they’re still creative, and they could overcome the addiction if they knew how, but their creativity is being severely limited.