Many of the disputes I’ve seen about knowledge/epistemology seem to have semantic roots, and can be resolved by clarifying how each side is using its terms.
I want to ignore all such disputes in this post, and focus on all definitions of knowledge that rely on truth.
The problem with this is that it’s impossible to be certain of any truth. A typical account of knowledge goes something like this:
a. I know P if and only if
i. P is true,
ii. I believe P, and
iii. I have sufficient grounds to justify believing P.[1]
We cannot be certain of anything about the physical world, since there’s always the possibility we’re dreaming, or being systematically deceived by some powerful entity.
We also can’t be certain of any logical conclusion since for any logical inference we draw, no matter how obvious it seems, there’s a non-zero chance that we’ve made a mistake in our reasoning. So however many times we check even our simplest conclusions, we can never reduce a non-zero chance of error to zero.
Since probability/plausibility is one of the many things we can never be certain of (for the same reasons), we can’t honestly claim any level of confidence in even the simplest of propositions. So we can’t say, e.g., that something as simple as ‘a=a’ is even ‘probably’ true.
Insofar as we define ‘knowledge’ as something that relies on truth, we can’t claim to have it.
Instead, you could define knowledge as something that doesn’t rely on truth, so that we have it by definition. Or you could claim that we have knowledge whenever we believe something that coincidentally happens to be true - in which case, since we don’t know what is true, we might happen to have knowledge. Both of these are semantic fudges that just obscure the point that we can never be certain of things.
That said, in place of certainty we make assumptions. Certain assumptions (those I’ve described on that page) continually serve us well enough that, for almost all purposes, we might as well treat them and their derivatives as ‘knowledge’, if only because our language doesn’t give us a practical alternative.
This means we can make assured-sounding statements like ‘we can’t claim to have knowledge’ without contradicting ourselves. If that claim turned out to be false, it wouldn’t necessarily negate anything I’ve written: we might only have knowledge we haven’t even realised it would be possible to have, or the claim might be false in a way that changes none of the above because we’ve made a fundamental error in our reasoning akin to getting a=a wrong. We can act like we know things, but we can’t really understand where our supposed knowledge comes from until we realise that we don’t have it at all.
The best we can do is blindly trust our assumptions.
[1] Adapted from http://www.ditext.com/gettier/gettier.html