What knowledge actually is is one of the great philosophical questions, like the existence of God, the nature of man and the universe, and so on. I know they are mediums, but what is knowledge?
L-systems, cellular automata, and neurons are roughly comparable. Why can neurons contain knowledge but not cellular automata?
A link between 'things' that allows the 'things' to discover a solution? That's exactly what a cellular automaton is capable of.
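To make the point concrete, here is a minimal sketch of an elementary cellular automaton in Python (the rule number, grid width, and wraparound boundary are my own illustrative choices, not anything from the discussion). Each cell's next state is computed purely from local links to its neighbors, yet rules like 110 are known to support universal computation:

```python
def ca_step(cells, rule=110):
    """Advance a 1-D binary cellular automaton one step.

    The next state of each cell is the rule number's bit indexed by the
    (left, self, right) neighborhood, read as a 3-bit number. The grid
    wraps around at the edges.
    """
    n = len(cells)
    out = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

# Evolve a single live cell for a few generations.
row = [0] * 11
row[5] = 1
for _ in range(4):
    row = ca_step(row)
```

Nothing in the update rule refers to the pattern as a whole; any global structure that emerges (and any "solution" the automaton computes) comes entirely from the local links between cells.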
They can contain knowledge, but I don't believe they can "possess" it. Possessing knowledge seems to require some kind of consciousness to appreciate it.
Hmm. But I think DNA and algorithms and such can do this, and thus possess knowledge.
Can algorithms, databases, gene pools, or any other non-being possess knowledge?
In the classic formulation, to know X is 1) to believe X, 2) where X is true, and 3) where your belief in X is justified. The history of epistemology is basically an argument about what 3) means and whether it's actually necessary.
If you think belief without justification is insufficient for knowledge, then it's hard to see how an unconscious thing such as an algorithm could know something. If you don't, then you'd still have to revise 1) to something like "expresses X" or "returns a positive truth-value for X".
Even then, you have the corollary problem that knowledge seems to require awareness of X's truth status-- that's where 'belief' comes in. For example, a rock in some reductive sense contains information in its physical structure, simply by virtue of its existence, but you couldn't ascribe knowledge to the rock on that basis. Is a computer any different? It's hard to see how. If one of its circuits contains information, then it can have another circuit that monitors the first (i.e. has 'knowledge' of it), but then the monitoring is simply a physical fact that requires another circuit, etc. There seems to be something unique about a conscious mind that lets it both return truth-values and grasp them, i.e. to have beliefs.
Something of value in there, I hope.