Will A.I. reach human intelligence?

Discussion in 'Science & Technology' started by Narz, Apr 24, 2010.

Poll: Will A.I. reach/surpass human intelligence?

  1. Yes, within 25 years. (15.6%)
  2. Yes, within 100 years. (26.7%)
  3. Yes, but not for a long, long time. (24.4%)
  4. Probably not. (11.1%)
  5. No. (15.6%)
  6. Not sure. (6.7%)
  1. peter grimes

    peter grimes ... Moderator

    Joined:
    Jul 18, 2005
    Messages:
    13,143
    Location:
    Queens, New York
    Really? I thought part of the problem was preventing errors from arising in the first place - not in programming languages that are error-free simply because the hardware exists in a state of perfection! I thought that computers aren't fault tolerant due to the nature of the computations: each successive step in a process relies on the results of the computation that preceded it, therefore an error will cascade into a meaningless torrent of garbage. So there's a tradeoff: higher computational speed vs. error tolerance. The human brain is super slow, but it can handle orders of magnitude more error than a digital system; while the digital system makes up in speed what it lacks in slop-handling.

    So much of this discussion has so far centered around the fundamental differences between the styles of computation, while avoiding the central premise of the thread, which is 'will AI reach human intelligence DESPITE the differences in hardware, software, and firmware?'

    If you view intelligence as an emergent property of a sufficiently sophisticated (and interconnected) processing gadget, then it doesn't matter whether the gadget is biological neurons, digital transistors, or quantum spintronics (if that's even something real??). In my far-from-expert opinion, the smart money is on the side of emergent intelligence, and, more impressively, within the next 25 years.
     
  2. Mise

    Mise isle of lucy

    Joined:
    Apr 13, 2004
    Messages:
    28,495
    Location:
    London, UK
    Well said.

    /thread, IMO :)
     
  3. Spoonwood

    Spoonwood Grand Philosopher

    Joined:
    Apr 30, 2008
    Messages:
    4,791
    Location:
    Ohio
    I disagree with this. A digital computer, by definition, behaves the same way truth values do in classical logic. In classical logic a statement is either true or false; there isn't the slightest margin for error. So if it is false in even the slightest part, the statement comes out false. Since digital computers are designed to have this same basic structure, if they have the slightest fault, they can't work.

    It works very much like classical mathematics: if we have ONLY ONE contradiction, no matter how psychologically unconvincing this may feel, then a statement comes out wrong. Or if a statement implies a falsity of any sort, it comes out wrong. [A minimal illustration of this appears after the last post.]
     
  4. uppi

    uppi Chieftain

    Joined:
    Feb 2, 2007
    Messages:
    3,458
    Uhh... that's not how computers work. There are billions of transistors in modern processors, and they are constantly exposed to heat and radiation. It's impossible that all of them work, and keep working, over the typical lifetime of a processor. If it were true that computers couldn't work with the slightest fault, then computer performance would not be anywhere close to current technology.

    And that's why redundancy, error detection, and error correction are built into every computer. There are algorithms that, given enough redundant information, can detect errors that occurred during a process. In some cases these errors can be corrected right away; in others the computer has to retry to get the right result. Of course, if there are too many errors, there is a point where everything breaks down and the system stops working. But errors were already occurring before that point; we just don't notice them, because the fault tolerance of computers is good enough to correct them. [A sketch of single-bit error detection and correction appears after the last post.]
     
  5. Olleus

    Olleus Chieftain

    Joined:
    Oct 30, 2005
    Messages:
    6,102
    Location:
    England
    You're not talking about the same kind of faults.

    Uppi is talking about hardware malfunction of the sort that turns a 1 into a 0 as it gets transmitted along a wire. Computers do, of course, have mechanisms for spotting those errors and correcting them.
    Spoonwood is talking about errors in logic, be it a badly written program, a false assumption, or something of that sort. Computers, obviously, have no way of spotting those, as they just blindly calculate.
     
  6. Souron

    Souron The Dark Lord

    Joined:
    Mar 9, 2003
    Messages:
    5,947
    Location:
    (GMT-5)
    Single-bit errors occur mainly during transmission and in permanent storage. Parity is used to combat those. But you won't find many parity bits on CPU registers. So if you are building a brain chip, you don't have to worry about faults.

    Logic errors can make a chip useless, but integrated circuits are always extensively checked; errors would be expensive otherwise.

    Also, since what I am suggesting is a neural net, the same mechanisms of fault tolerance in the real brain can be made present in the synthetic brain. For example, one of the ways that the brain learns is by strengthening neural connections when they are used frequently. One signal error would not do much to the connection; only prolonged use would make a difference. In artificial neural nets, this mechanism can be implemented by increasing the weight of frequently used connections and decreasing the weight of unused connections. (This is a form of unsupervised learning.) Like the brain, a small error in the magnitude or frequency (that is, how often a high value is sent) of a signal would not have much impact on the artificial brain. [A sketch of this usage-based weighting appears after the last post.]
     
  7. Spoonwood

    Spoonwood Grand Philosopher

    Joined:
    Apr 30, 2008
    Messages:
    4,791
    Location:
    Ohio
    But how does the computer "understand" a "small error"? And on top of that, how does it distinguish between a "small error", a "very small error", and a "medium error"? And if you do that, haven't you changed the basis on which the computer does its computations, from a digital or binary basis to a graded or graduated one?
     
  8. Souron

    Souron The Dark Lord

    Joined:
    Mar 9, 2003
    Messages:
    5,947
    Location:
    (GMT-5)
    It doesn't distinguish between errors at all. It's just that small errors will not throw off a long run of the same active signal.

    Neural nets generally use real (floating-point) numbers, so the weights, and sometimes the signals, are effectively graduated. There are limits to the granularity, though. For a fast neural net implementation, you'd want to get away with as coarse a granularity as possible, particularly in the signals.

    BTW - it would probably be more useful to compare neuron count to neuron count. A brain has 10^11 neurons. So if we assume a Gigahertz clock and 10^8 transistors per chip, then 10,000 transistors' worth of work would be available per neuron implementation. These are all very rough calculations, though. [The arithmetic is spelled out, under stated assumptions, after the last post.]
     
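The point in post #3, that a classically interpreted statement which is false in the slightest part is simply false, can be made concrete with a one-line check. The Python below is only an illustration of that truth-table behaviour, not of anything specific the poster had in mind.

    # Post #3: in classical two-valued logic a single false part makes the
    # whole statement false. A conjunction of 999 truths and one falsehood
    # evaluates to False, with no notion of "mostly true".
    claims = [True] * 999 + [False]
    print(all(claims))   # False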
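Post #4 mentions redundancy and error-correcting algorithms without naming a specific scheme. The sketch below uses a Hamming(7,4) code, chosen here purely as an illustration (real memories and buses use related ECC codes), to show how a few extra parity bits let a single flipped bit be located and flipped back.

    # Minimal sketch of single-bit error correction (Hamming(7,4)).
    # 4 data bits are stored with 3 parity bits; the parity "syndrome"
    # gives the position of any single flipped bit.

    def hamming74_encode(d):
        """d is a list of 4 data bits; returns 7 bits laid out as p1 p2 d1 p3 d2 d3 d4."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_correct(c):
        """Returns (corrected codeword, error position); position 0 means no error detected."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
        pos = s1 * 1 + s2 * 2 + s3 * 4   # syndrome = 1-based position of the flipped bit
        if pos:
            c[pos - 1] ^= 1              # flip the faulty bit back
        return c, pos

    codeword = hamming74_encode([1, 0, 1, 1])
    corrupted = list(codeword)
    corrupted[4] ^= 1                    # simulate one bit flipped by noise
    fixed, where = hamming74_correct(corrupted)
    print(fixed == codeword, where)      # True 5 -- the error was found and corrected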

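Post #6 describes strengthening frequently used connections and weakening unused ones as a form of unsupervised learning. The update rule and constants below are assumptions made for illustration, not a model taken from the thread; they show the claimed robustness: one corrupted signal barely moves a weight that hundreds of clean signals have built up.

    # Usage-based (Hebbian-style) weighting of a single connection.
    def run_connection(signals, rate=0.05, decay=0.01, w=0.0):
        """signals: sequence of 0/1 activity on one connection; returns the final weight."""
        for s in signals:
            if s:
                w += rate * (1.0 - w)   # strengthen toward 1 when the connection is used
            else:
                w -= decay * w          # slowly weaken when it sits idle
        return w

    clean = [1, 1, 1, 0, 1] * 200        # a frequently used connection (1000 signals)
    noisy = list(clean)
    noisy[500] ^= 1                      # one corrupted signal somewhere in the run

    print(round(run_connection(clean), 4))
    print(round(run_connection(noisy), 4))   # nearly identical: one error is washed out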
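The 10,000-per-neuron figure in post #8 only works out if the Gigahertz clock is read as letting each transistor be time-shared across many slow, brain-rate neurons. That reading, and the roughly 100 Hz neuron update rate used below, are assumptions added here; only the 10^11 neurons, 10^8 transistors, and 1 GHz clock come from the post.

    # Back-of-the-envelope arithmetic for post #8. Only the 10^11 neurons,
    # 10^8 transistors per chip, and 1 GHz clock come from the post; the
    # time-sharing reading and the ~100 Hz neuron rate are assumptions.
    neurons_in_brain     = 1e11
    transistors_per_chip = 1e8
    clock_hz             = 1e9   # 1 GHz
    neuron_rate_hz       = 1e2   # assumed biological update rate, ~100 Hz

    # Each transistor can be time-shared across clock_hz / neuron_rate_hz slow neurons.
    reuse_factor     = clock_hz / neuron_rate_hz             # 1e7
    transistor_slots = transistors_per_chip * reuse_factor   # 1e15 transistor-timeslots
    per_neuron       = transistor_slots / neurons_in_brain

    print(per_neuron)   # 10000.0 transistors' worth of work per simulated neuron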