Will A.I. reach human intelligence?

Souron said:
the reason computers are not fault tolerant is because faults are so rare. Transistors don't individually fail to work.

Really? I thought part of the problem was preventing errors from arising in the first place - not that programming languages are error-free simply because the hardware exists in a state of perfection! I thought that computers aren't fault tolerant due to the nature of the computations: each successive step in a process relies on the results of the computation that preceded it, so an error cascades into a meaningless torrent of garbage. So there's a tradeoff: higher computational speed vs. error tolerance. The human brain is super slow, but it can handle orders of magnitude more error than a digital system, while the digital system makes up in speed what it lacks in slop-handling.

So much of this discussion has so far centered around the fundamental differences between the styles of computation, while avoiding the central premise of the thread, which is 'will AI reach human intelligence DESPITE the differences in hardware, software, and firmware?'

If you view intelligence as an emergent property of a sufficiently sophisticated (and interconnected) processing gadget, then it doesn't matter whether the gadget is biological neurons, digital transistors, or quantum spintronics (if that's even something real??) In my far-from-expert opinion the smart money is on the side of emergent intelligence, and, more impressively, within the next 25 years.
 
Well said.

/thread, IMO :)
 
Souron said:
As for fault tolerance, the reason computers are not fault tolerant is because faults are so rare.

I disagree with this. A digital computer, by definition, behaves the same way that truth values in classical logic do. In classical logic a statement comes out either true or false; there doesn't exist the slightest margin for error. So if a statement is in the slightest part false, then it comes out false. Since digital computers get designed to have this same basic structure, if they have the slightest fault, they can't work.

It works very much like how, in classical mathematics, if we have ONLY ONE contradiction, no matter how psychologically unconvincing this may feel, a statement comes out wrong. Or if a statement implies a falsity of any sort, it comes out wrong.
 

Uhh...that's not how computers work. There are billions of transistors in modern processors, and they are constantly exposed to heat and radiation. It's impossible for all of them to work and stay working over the typical lifetime of a processor. If it were true that computers wouldn't work if they had the slightest error, then computer performance would be nowhere near what current technology achieves.

And that's why redundancy, error detection and error correction are built into every computer. There are algorithms that, fed with enough redundant information, can detect errors that occurred during a process. In some cases these errors can be corrected right away; in others the computer has to retry to get the right result. Of course, if there are too many errors, there comes a point where everything breaks down and the system stops working. But the errors were there all along - we just don't notice them, because the fault tolerance of computers is good enough to correct them.
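To make that concrete, here is a minimal sketch of detect-and-retry with a single even-parity bit. It is purely an illustration (the function names are mine, and real hardware uses much stronger codes such as ECC or CRC):

```python
def with_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """True unless an odd number of bits were flipped."""
    return sum(word) % 2 == 0

def read_with_retry(read_word, max_retries=3):
    """Re-read a stored word until its parity checks out, then strip the parity bit."""
    for _ in range(max_retries):
        word = read_word()   # read_word is any callable returning a word with its parity bit
        if parity_ok(word):
            return word[:-1]
    raise IOError("too many faulty reads, giving up")
```

A single parity bit can only detect an odd number of flips, not repair them, which is why the computer sometimes has to retry (or use a stronger code that can correct on the spot).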
 
You're not talking about the same kind of faults.

Uppi is talking about hardware malfunction of the sort that turns a 1 into a 0 as it gets transmitted along a wire. Computers do, of course, have mechanisms for spotting those errors and correcting them.
Spoonwood is talking about errors in logic. Be that a badly written program or a false assumption or something of that sort. Computers, obviously, have no way of spotting those as they just blindly calculate.
 
One-bit errors occur mainly during transmission and in permanent storage. Parity is used to combat those. But you won't find many parity bits on CPU registers. So if you are building a brain chip, you don't have to worry about faults.
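For what it's worth, parity bits can be arranged so that a single flipped bit is not only detected but located and flipped back, which is roughly what ECC memory does. Here's a hedged sketch of the classic Hamming(7,4) scheme - my own illustration, not something from the post above:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Protect 4 data bits with 3 parity bits; return [p1, p2, d1, p4, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(word):
    """Fix at most one flipped bit and return the 4 data bits."""
    c = list(word)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    bad_position = s1 + 2 * s2 + 4 * s4   # 0 means the word is clean
    if bad_position:
        c[bad_position - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode(1, 0, 1, 1)
word[4] ^= 1                               # simulate a one-bit storage error
assert hamming74_correct(word) == [1, 0, 1, 1]
```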

Logic errors can make a chip useless, but integrated circuits are always extensively checked. Errors can be expensive otherwise.

Also, since what I am suggesting is a neural net, the same mechanisms of fault tolerance found in the real brain can be made present in the synthetic brain. For example, one of the ways the brain learns is by strengthening neural connections when they are used frequently. One signal error would not do much to a connection; only prolonged use would make a difference. In artificial neural nets, this mechanism can be implemented by increasing the weight of frequently used connections and decreasing the weight of unused connections (this is a form of unsupervised learning). Like the brain, a small error in the magnitude or frequency (that is, how often a high value is sent) of the signal would not have much impact on the artificial brain.
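A rough sketch of that strengthen-with-use idea, assuming a plain Hebbian-style rule; the names and the learning/decay rates are my own illustrative choices, not anything Souron specified:

```python
import numpy as np

LEARN_RATE = 0.01    # how quickly a frequently used connection strengthens
DECAY_RATE = 0.001   # how quickly an unused connection fades

def update_weights(weights, pre_active, post_active):
    """Hebbian-style update: strengthen connections whose two ends fired together,
    gently decay the rest."""
    co_active = np.outer(post_active, pre_active)        # 1 where both units fired
    weights = weights + LEARN_RATE * co_active           # "fire together, wire together"
    weights = weights - DECAY_RATE * (1 - co_active) * weights
    return np.clip(weights, 0.0, 1.0)
```

Because one spurious spike moves a weight by at most LEARN_RATE, it takes many repetitions to change a connection meaningfully, which is exactly the kind of fault tolerance being described.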
 
Souron said:
Like the brain, a small error in the magnitude or frequency (that is, how often a high value is sent) of the signal would not have much impact on the artificial brain.

But how does the computer "understand" "small error"? And on top of that, how does the computer distinguish between "small error" and "very small error" and "medium error"? And if you do that, haven't you changed the basis on which the computer does its computations, from a digital or binary basis to a graded or graduated basis?
 
It doesn't distinguish between errors at all. It's just that small errors will not throw off a long run of the same active signal.

Neural nets generally use real (floating-point) numbers, so the weights and sometimes the signals are effectively graduated. There are limits to the granularity, though. For a fast neural net implementation, you'd want to get away with as coarse a granularity as possible, particularly in the signals.
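As a rough illustration of that trade-off (the 16 signal levels and the sigmoid are my own assumptions, not a claim about any particular implementation):

```python
import numpy as np

def quantize(signal, levels=16):
    """Snap a signal in [0, 1] onto a small number of discrete levels (~4 bits)."""
    return np.round(signal * (levels - 1)) / (levels - 1)

def neuron_output(weights, signals):
    """One artificial neuron: weighted sum of coarsely quantized inputs, squashed to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-np.dot(weights, quantize(signals))))

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=100)                              # graduated floating-point weights
x = rng.uniform(size=100)
noisy = np.clip(x + rng.normal(scale=0.01, size=100), 0.0, 1.0)  # small signal errors
print(neuron_output(w, x), neuron_output(w, noisy))              # nearly identical outputs
```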

BTW - It would probably be more useful to compare neuron count to neuron count. A brain has 10^11 neurons. So if we assume a gigahertz clock and 10^8 transistors per chip, then 10,000 transistors would be available per neuron implementation. These are all very rough calculations, though.
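One way to recover that 10,000 figure - assuming (and this is my assumption, not something stated above) that each neuron only needs updating on the order of 100 times per second, and that a chip's clock cycles can be time-shared across many neurons:

```python
neurons          = 1e11   # neurons in a human brain
transistors      = 1e8    # transistors per chip (figure from the post above)
clock_hz         = 1e9    # assumed 1 GHz clock
neuron_update_hz = 1e2    # assumed update rate per neuron (my assumption)

transistor_cycles_per_second = transistors * clock_hz             # 1e17
neuron_updates_per_second    = neurons * neuron_update_hz         # 1e13
print(transistor_cycles_per_second / neuron_updates_per_second)   # 10000.0 transistor-cycles per neuron update
```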
 