Are we always wrong?
The statement that “we are always wrong” is contentious, even within fallibilism. Concepts like truth and falsity, degrees of truth, or better and worse explanations all come with their own pitfalls. In this discussion, I hope we can reach a consensus on how to describe fallibilism in a way that acknowledges and addresses these challenges.
An idea can be either true or false — it’s a binary distinction, and some statements can be absolutely true. However, the critical nuance is that such truth is conditionally absolute: it depends on the background knowledge and underlying assumptions or axioms. For example, 1 + 1 = 2 is absolutely true, but specifically within the framework of the Peano axioms.
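As an aside, this "conditionally absolute" point can be made concrete in a proof assistant. A minimal sketch in Lean 4 (using Lean's built-in natural numbers, which follow the Peano construction; the modular-arithmetic reading via `Fin 2` is my own illustrative choice, not from the post above):

```lean
-- Under Peano-style natural numbers, 1 + 1 = 2 holds by computation:
-- both sides reduce to Nat.succ (Nat.succ Nat.zero).
example : 1 + 1 = 2 := rfl

-- But the truth is relative to the axioms in play: read the same
-- symbols in arithmetic modulo 2 and 1 + 1 = 0 instead.
example : (1 + 1 : Fin 2) = 0 := by decide
```

Both statements are "absolutely true" within their respective frameworks; neither is true unconditionally of the bare symbols.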
There isn’t a clear logical or computational method for determining whether one explanation is better than another. However, David Deutsch offers useful criteria for evaluating explanations. He suggests that a good explanation is better than a rival if it explains more — meaning it has fewer errors, fewer loose ends, or a broader explanatory range (i.e., it accounts for more phenomena). I believe Popper also describes a solution as better if it has fewer unintended consequences than a rival idea. (These are my interpretations, not quotes.)
[Deutsch] suggests that a good explanation is better than a rival if it explains more — meaning it has fewer errors, fewer loose ends, or a broader explanatory range (i.e., it accounts for more phenomena). I believe Popper also describes a solution as better if it has fewer unintended consequences than a rival idea. (These are my interpretations, not quotes.)
Citations needed, that disclaimer notwithstanding.
We can't always be wrong, because that implies that correct ideas are not expressible, which makes no sense.
I think there is a sense in which we can never be sure that we are right: there's always some possibility that we are wrong, even when we think we are completely right. And if we are completely right, there is nothing "manifest" about that.
Let's say I open my fridge and see cheese there, so I conclude, "I have cheese in my fridge." I may be hallucinating, or wrong about the category of cheese, or it may only appear to be cheese, or whatever. In that sense I could potentially be wrong. However, I find it silly to think that I am infinitely wrong, all the time, in my assessment of where my food is. That's like saying that we don't know what happens after we die. We do in every single way in which we use the term "know".
I think this idea that we are always wrong needs a rephrase, such as "we could always consider how we could be wrong", or "there is nothing that justifies our true belief", or "we could and should always criticise", or "nothing exists outside of criticism" (as we picked 1 + 1 and not 1 + 2 for some critical reason). The rephrase leaves open the possibility of being right a lot (like about where your food is, because you just found it), while still allowing that the cheese you just saw is actually your butter.
We do in every single way in which we use the term "know".
Don’t people disagree about what ‘know’ means? As in, some think it means they’re justified in their belief, others think they have corrected a sufficient amount of errors, etc…
Sure, philosophers and pedants do. But typically people use the word "know" in situations well short of being absolutely sure.
We cannot always be wrong. If all our ideas are false, then so is the idea that all our ideas are false.
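This self-refutation can even be checked formally. A minimal Lean 4 sketch, under the (deliberately crude) assumption that "all our ideas are false" can be read as "every proposition is false":

```lean
-- Read "all our ideas are false" as: ∀ p : Prop, ¬ p.
-- Applying it to any true proposition (here, True itself)
-- immediately yields a contradiction, so the claim refutes itself.
theorem not_all_false : ¬ (∀ p : Prop, ¬ p) :=
  fun h => h True trivial
```

The formalization is much narrower than the conversational claim, but it captures the structure of the argument: a universal falsity claim must apply to something true, including, on a stronger reading, to itself.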