Activity Feed
“Justification without finality is fake.” (#4391) In other words, a purported justification that doesn’t claim to be final isn’t a justification at all.
#4262·Dennis Hackethal (OP), 17 days ago
Another idea: letting users post ideas to their own profile. Such ideas wouldn’t be part of a discussion.
Implemented as of ecc72ff. Check your profile.
This is the first idea posted straight to my profile, outside of discussions.
Acknowledge the contradiction between disregarding market developments and taking them into account
Dollar-Cost Averaging
Dollar-cost averaging (DCA) is when you invest a fixed amount on a regular basis regardless of market developments.
This practice can work well long term for assets that reflect the value of the entire stock market (or a big part of it).
Long term, we can expect the stock market as a whole to gain value. So if you invest part of your income every month, say, then your position will grow in the long run.
In the meantime, you get to reduce risk by not investing all your money at once. You also get to react to developments that affect the stock market and can decide to interrupt your investment schedule. But I personally like ‘boring’ investment strategies, meaning strategies that are automated and reliable.
Dollar-Cost Averaging
Dollar-cost averaging (DCA) is when you invest a fixed amount on a regular basis regardless of market developments.
This practice can work well long term for assets that reflect the value of the entire stock market (or a big part of it).
Long term, we can expect the stock market as a whole to gain value. So if you invest part of your income every month, say, then your position will grow in the long run.
In the meantime, you get to reduce risk by not investing all your money at once. You also get to react to developments that affect the stock market and can decide to interrupt your investment schedule. But again, the idea is typically to invest regardless of market developments. I personally like ‘boring’ investment strategies, meaning strategies that are automated and reliable.
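The arithmetic behind DCA can be made concrete. The sketch below is illustrative only (mine, not part of the original post; the prices and the monthly amount are made up): investing a fixed dollar amount buys more shares when prices are low and fewer when they are high, so the average cost per share ends up at or below the average of the prices paid.

```python
# Illustrative sketch of dollar-cost averaging (hypothetical numbers).
# A fixed amount invested at each price buys more shares at low prices,
# pulling the average cost per share below the average price.
def dca(prices, amount=100.0):
    """Invest a fixed `amount` at each price; return (total shares, avg cost per share)."""
    shares = sum(amount / p for p in prices)  # shares bought each period
    invested = amount * len(prices)           # total dollars invested
    return shares, invested / shares

# Hypothetical monthly prices for some broad-market index fund:
prices = [50.0, 40.0, 25.0, 50.0]
shares, avg_cost = dca(prices)
print(round(shares, 2), round(avg_cost, 2))   # avg cost is below the
print(sum(prices) / len(prices))              # simple average price
```

Here the average price over the four months is 41.25, but the average cost per share is lower, because the fixed monthly amount bought four shares in the cheap month and only two in the expensive ones.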
#4152·Dennis Hackethal, 24 days ago
Dollar-Cost Averaging
Dollar-cost averaging (DCA) is when you invest a fixed amount on a regular basis regardless of market developments.
This practice can work well long term for assets that reflect the value of the entire stock market (or a big part of it).
Long term, we can expect the stock market as a whole to gain value. So if you invest part of your income every month, say, then your position will grow in the long run.
In the meantime, you get to reduce risk by not investing all your money at once. You also get to react to developments that affect the stock market and can decide to interrupt your investment schedule. But I personally like ‘boring’ investment strategies, meaning strategies that are automated and reliable.
… regardless of market developments.
vs
You also get to react to developments …
A contradiction.
#4393·Dennis Hackethal (OP), about 11 hours ago
But this sounds like you’re saying justificationism is necessarily the same as foundationalism. Whereas in #4392 you agreed it’s only a kind of justificationism.
Why does this sound like I am equating them?
#4391·Dirk Meulenbelt, about 11 hours ago
Indeed. Justification without finality is fake.
"X is true because of Y, but we can discuss Y"
Is functionally the same as
"X is true and we can discuss why"
But this sounds like you’re saying justificationism is necessarily the same as foundationalism. Whereas in #4392 you agreed it’s only a kind of justificationism.
#4383·Dennis Hackethal (OP), revised about 13 hours ago
Dirk writes:
Foundationalism, or justificationism, is the idea that beliefs can be fully justified, proven true by some final authority beyond question.
I’m not sure foundationalism and justificationism are quite the same thing.
From BoI ch. 1 glossary:
[Justificationism is t]he misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.
Whereas foundationalism describes a prerequisite for knowledge to grow (properly). As in, needing a secure foundation or else the whole edifice falls apart.
I could see foundationalism being a flavor of justificationism, but not the same thing.
I’m not sure foundationalism and justificationism are quite the same thing.
You are right. Foundationalism is a kind of justificationism. The secure foundation is a kind of justification.
I will have to rewrite this in my article.
#4386·Dennis Hackethal (OP), about 13 hours ago
Just because Dirk’s notion of justificationism breaks with BoI’s doesn’t mean Dirk is wrong. BoI could be wrong.
The same passage quoted in #4388 (the first one) just links to an entire page with no quotes or section information. That makes verifying the information harder: readers would have to read the entire page.
Sources should be specific: either give a verbatim quote or link to a highlight.
The same passage quoted in #4388 (the first one) links to a secondary source on Popper. Secondary sources on Popper are usually bad. Use a primary source – something Popper himself said.
The article says:
A follower of the philosopher Karl Popper would object: isn’t this just foundationalism in disguise? … Popper showed that’s impossible: any justification needs a deeper justification, and that one needs another, so you either chase reasons forever or stop at one you can’t defend.
I didn’t read the entire linked page, but based on a word search for ‘regress’, it attributes the infinite-regress problem to Hans Albert, not Popper:
[Albert] argues that any attempt at justification faces a three-pronged difficulty that is traceable to Agrippa: One alternative leads to an infinite regress as one seeks to prove one assumption but then needs to assume some new one…
#4386·Dennis Hackethal (OP), about 13 hours ago
Just because Dirk’s notion of justificationism breaks with BoI’s doesn’t mean Dirk is wrong. BoI could be wrong.
For a tiebreaker, consider this Wiktionary definition of justificationism (links removed):
An approach that regards the justification of a claim as primary, while the claim itself is secondary…
Since this quote doesn’t mention finality, it sounds more in line with BoI.
#4385·Dennis Hackethal (OP), about 13 hours ago
The article says:
[Justificationism] is the idea that beliefs can be fully justified, proven true by some final authority beyond question.
This definition breaks with BoI. The glossary from ch. 1 says:
[Justificationism is t]he misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.
Note that this second quote says nothing about finality “beyond question”.
#4379·Benjamin Davies, 2 days ago
The same decision may be appealed only once.
Does this not inhibit error correction? Why not just leave this to the discretion of Veritula, on a case by case basis?
As written, a limitation is placed on users, not on Veritula. I want to set expectations and protect my time by preventing an obligation to have extended discussions over moderation decisions. I remain free to make exceptions.
#4378·Benjamin Davies (OP), 2 days ago
Predatory businesses can’t limit customers’ creativity without the consent of the customer, so these issues are inextricably bound.
I have zero experience on the drug market, but I think it’s fair to assume that companies that want to get business by inhibiting people’s creativity rather than enhancing it don’t particularly care about consent.
I don’t expect honest advertising from such people. I expect trickery, not consent.
#4365·Dennis Hackethal (OP), revised 3 days ago
Rules for Participation
Veritula welcomes a wide range of discussion topics. Generally speaking, people have free speech here. Unpopular topics will not automatically get people banned. The goal of moderation is to preserve productive, truth-seeking discussion.
Behavior that is intended, or likely, to sabotage debate or prevent progress is a bannable offense. Such behavior includes, but is not limited to, harassment, brigading, rage baiting, public shaming, and persistent bad-faith argumentation or refusal to engage substantively.
Veritula takes intellectual property seriously and reserves the right to take down content that infringes on others’ intellectual property.
Veritula also reserves the right to take down obscene content such as pornography.
Serious instances of off-platform behavior that clearly would have violated these rules on-platform may result in removal.
Depending on the severity of an infraction, moderators may issue a warning, temporarily lock an account, or permanently ban the account.
Looking for loopholes in these rules, or abusing the letter to violate the spirit of these rules, is a bannable offense.
Moderation decisions are at the discretion of Veritula.
Users may appeal moderation decisions by contacting the moderators within a reasonable time after a decision. Appeals should explain why the decision was wrong. Appeals are reviewed at the moderators’ discretion. The same decision may be appealed only once.
Talks with moderators should remain respectful and constructive. Changes to these rules should be proposed before issues arise by criticizing this idea.
The same decision may be appealed only once.
Does this not inhibit error correction? Why not just leave this to the discretion of Veritula, on a case by case basis?
#4375·Dennis Hackethal, 2 days ago
I agree, but this criticism chain is about predatory businesses limiting their customers’ creativity, not their own.
Predatory businesses can’t limit customers’ creativity without the consent of the customer, so these issues are inextricably bound.
Limitations of Veritula
Veritula can help you discover a bit of truth.
It’s not guaranteed to do so. It doesn’t give you a formula for truth-seeking. There’s no guarantee that an idea with no pending criticisms won’t get a new criticism tomorrow. All ideas are tentative in nature. That’s not a limitation of Veritula per se but of epistemology generally (Karl Popper).
There are currently no safeguards against bad actors. For example, people can keep submitting arbitrary criticisms in rapid succession just to ‘save’ their pet ideas. There could be safeguards such as rate-limiting criticisms, but that encourages brigading, making sock-puppets, etc. That said, I think these problems are soluble.
Opposing viewpoints should be defined clearly and openly. Not doing so hinders truth-seeking and rationality (Ayn Rand).
Personal attacks poison rational discussions because they turn an open, objective, impartial truth-seeking process into a defensive mess. They shift the topic of the discussion from the ideas themselves to the participants. People are actually open to harsh criticism as long as their interlocutor shows concern for how it lands (Chris Voss). I may use ‘AI’ at some point to analyze the tone of an idea upon submission.
Veritula works best for conscientious people with an open mind – people who aren’t interested in defending their ideas but in correcting errors. That’s one of the reasons discussions shouldn’t get personal. Veritula can work to resolve conflicts between adversaries, but I think that’s much harder. Any situation where people argue to be right rather than to find truth is challenging. In those cases, it’s best if an independent third party uses Veritula on their behalf to adjudicate the conflict objectively.
Veritula only works for explicit ideas. If you have an inexplicit criticism of an idea, say, then Veritula can’t help with that until you’re able to write the criticism down, at which point it’s explicit. (The distinction between explicit vs inexplicit ideas goes back to David Deutsch. ‘Inexplicit’ means ‘not expressed in words or symbols’.)
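For concreteness, here is one way the rate-limiting safeguard mentioned above might look. This is a hypothetical sketch of mine, not anything Veritula implements; the class name, parameters, and limits are all invented. The idea: each user may submit at most a fixed number of criticisms per sliding time window.

```python
import time

# Hypothetical sliding-window rate limiter for criticism submissions.
# Not an actual Veritula feature; limits and names are made up.
class RateLimiter:
    def __init__(self, limit=5, window=3600.0):
        self.limit = limit          # max submissions per window
        self.window = window        # window length in seconds
        self.submissions = {}       # user id -> list of submission timestamps

    def allow(self, user, now=None):
        """Return True and record the submission if the user is under the limit."""
        now = time.time() if now is None else now
        # Keep only timestamps still inside the sliding window.
        recent = [t for t in self.submissions.get(user, []) if now - t < self.window]
        if len(recent) >= self.limit:
            self.submissions[user] = recent
            return False
        recent.append(now)
        self.submissions[user] = recent
        return True
```

As noted above, such a limit cuts both ways: it slows a lone bad actor flooding a discussion with arbitrary criticisms, but it also rewards brigading and sock-puppet accounts, since each extra account gets its own quota.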
Limitations of Veritula
Veritula can help you discover a bit of truth.
It’s not guaranteed to do so. It doesn’t give you a formula for truth-seeking. There’s no guarantee that an idea with no pending criticisms won’t get a new criticism tomorrow. All ideas are tentative in nature. That’s not a limitation of Veritula per se but of epistemology generally (Karl Popper).
There are currently no safeguards against bad actors. For example, people can keep submitting arbitrary criticisms in rapid succession just to ‘save’ their pet ideas. There could be safeguards such as rate-limiting criticisms, but that encourages brigading, making sock-puppets, etc. That said, I think these problems are soluble.
Opposing viewpoints should be defined clearly and openly. Not doing so hinders truth-seeking and rationality (Ayn Rand).
Personal attacks poison rational discussions because they turn an open, objective, impartial truth-seeking process into a defensive mess. They shift the topic of the discussion from the ideas themselves to the participants. People are actually open to harsh criticism as long as their interlocutor shows concern for how it lands (Chris Voss). I may use ‘AI’ at some point to analyze the tone of an idea upon submission.
Veritula works best for conscientious people with an open mind – people who aren’t interested in defending their ideas but in correcting errors. That’s one of the reasons discussions shouldn’t get personal. Veritula can work to resolve conflicts between adversaries, but I think that’s much harder. Any situation where people argue to be right rather than to find truth is challenging. In those cases, it’s best if an independent third party uses Veritula on their behalf to adjudicate the conflict objectively.
Veritula works best for explicit ideas. If you have an inexplicit criticism of an idea, say, make a reasonable effort to make the criticism explicit first, then add it to Veritula. If you can’t, add a placeholder for the inexplicit criticism – something like ‘I have an inexplicit criticism of this idea’. (The distinction between explicit vs inexplicit ideas goes back to David Deutsch. ‘Inexplicit’ means ‘not expressed in words or symbols’.)
#4374·Benjamin Davies (OP), 2 days ago
It is not the business of the government to prevent people from severely limiting their own creativity.
I agree, but this criticism chain is about predatory businesses limiting their customers’ creativity, not their own.
#4373·Dennis Hackethal, 3 days ago
denies human creativity
No, they’re still creative, and they could overcome the addiction if they knew how, but their creativity is being severely limited.
It is not the business of the government to prevent people from severely limiting their own creativity.