The latter. Let's just say that knowledge is reliable information. You don't even need a believer. This gives us the immediate bonus of acknowledging that algorithms, databases, etc. do contain knowledge, which strikes me as the right thing to say, capturing the everyday use of the word "knowledge".
This isn't to deny that internally justifiable beliefs are an important part of philosophy. And if we like the traditional five-fold division of philosophy, then discussions of internal justification belong in the "epistemology" bin.
I'm sure databases and algorithms can contain knowledge, but can they actually possess it? I don't think so, but maybe that's just my personal bias speaking (i.e. I think that dogs can know who their owner is, but a database can't know whatever it is that it's storing, even though the two are legitimately analogous). I think my problem is that it's impossible to reconcile these two conceptions of knowledge (or at least, impossible for me to think of a way to do it...).
I'm happy to accept that knowledge is defined either way, though, even if that means that my beliefs about what is and isn't knowledge don't match up to what is definitionally true.
I don't know if it's occurred to anyone over the course of this extensive, fascinating and mind-bending thread, but the dictionary seems to have this covered pretty nicely.
You mean, problems other than the fact that internal justification goes by the wayside? Because I think that problem is dealt with (or should we say, ducked) easily enough - by acknowledging that internal justification is important too, in its own right. It needn't be sewn into the same package with knowledge.
Say that knowledge is true information reliably produced, and we can be more expansive about knowledge, rather than confining our attention to individual cognizing organisms.
I'm confused as to how you can construct a Gettier scenario about reliability. You reliably sense something, yet the reliability of the connection from world to representation is itself somehow accidental?