The reason computers are not fault tolerant is that faults are so rare. Transistors don't individually fail.
Really? I thought part of the problem was preventing errors from arising in the first place, not that programming languages are error-free simply because the hardware exists in a state of perfection! I thought computers aren't fault tolerant because of the nature of the computations: each successive step relies on the results of the step that preceded it, so a single error cascades into a meaningless torrent of garbage. So there's a tradeoff: higher computational speed vs. error tolerance. The human brain is super slow, but it can handle orders of magnitude more error than a digital system, while the digital system makes up in speed what it lacks in slop-handling.
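To make the cascade concrete, here's a toy sketch of my own (the function name and the choice of a single XOR'd bit are just illustrative assumptions): in a chain of dependent steps, one transient bit flip in an intermediate result is carried, undiluted, into every result after it.

```python
def chained_sum(values, flip_bit_at=None):
    """Sum values sequentially; optionally simulate a one-time hardware
    fault by flipping a single bit in one intermediate partial sum."""
    total = 0
    for i, v in enumerate(values):
        total += v
        if i == flip_bit_at:
            total ^= 1 << 20  # a single transient bit flip at step i
    return total

clean = chained_sum(range(1000))
faulty = chained_sum(range(1000), flip_bit_at=3)
# The early fault survives intact in the final answer:
# faulty - clean == 2**20, i.e. one flipped bit at step 3 shifts
# every downstream result, with nothing in the chain to damp it out.
```

Nothing in the pipeline "averages away" the error, which is the point: digital computation trades away that kind of tolerance for speed and exactness.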
So much of this discussion has centered on the fundamental differences between styles of computation while avoiding the central premise of the thread: will AI reach human intelligence DESPITE the differences in hardware, software, and firmware?
If you view intelligence as an emergent property of a sufficiently sophisticated (and interconnected) processing gadget, then it doesn't matter whether the gadget is biological neurons, digital transistors, or quantum spintronics (if that's even a real thing??). In my far-from-expert opinion, the smart money is on the side of emergent intelligence, and, more impressively, within the next 25 years.