Within thirty years [written in 1993], we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.
The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
* There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
* Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
* Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
* Biological science may provide means to improve natural human intellect.
The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)
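To make the "amazingly steady curve" concrete, here is a minimal extrapolation sketch. The two-year doubling period and the ~3 million transistor baseline for 1993 are illustrative assumptions of mine, not figures taken from the essay or its reference [17].

```python
# Back-of-the-envelope extrapolation of the hardware trend the essay cites.
# ASSUMPTIONS (illustrative, not from [17]): transistor counts double roughly
# every 2 years, and a 1993 high-end chip holds ~3 million transistors.

DOUBLING_PERIOD_YEARS = 2.0   # assumed doubling time
BASE_YEAR = 1993
BASE_TRANSISTORS = 3e6        # rough figure for a 1993 microprocessor

def projected_transistors(year: int) -> float:
    """Project transistor count per chip at `year` under steady doubling."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (2005, 2020, 2030):
    print(f"{year}: ~{projected_transistors(year):.2e} transistors/chip")
# By 2030 this projects on the order of 10^12 transistors per chip -- the
# kind of raw-capacity growth the essay's 2005-2030 window leans on.
```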
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.
Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's "tool" -- any more than humans are the tools of rabbits or robins or chimpanzees.
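Good's runaway can be made concrete with a toy model: each machine designs a successor whose improvement factor grows with its own capability. The starting level and gain below are arbitrary illustrative choices, not anything Good specifies.

```python
# Toy model of Good's "intelligence explosion": each generation designs a
# successor whose improvement depends on its own capability. The numbers
# are illustrative assumptions only.

def explosion(start: float, gain: float, generations: int) -> list[float]:
    """Each generation improves on the last by a factor that itself grows
    with current capability -- a crude stand-in for 'better designers
    build even better designers'."""
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        levels.append(current * (1 + gain * current))  # self-amplifying step
    return levels

# Starting just above human parity (1.0), the growth rate itself grows:
for gen, level in enumerate(explosion(start=1.05, gain=0.1, generations=10)):
    print(f"gen {gen}: capability {level:.3g}")
```

Under these assumptions capability roughly doubles by generation 6 and more than septuples by generation 10; the point is only the shape of the curve (each doubling arrives faster), not the particular numbers.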
If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a _dedication_ that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)
> Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.

I'm not aware of any such trend in AI ...
> At very worst, we just use that massive ball of computing power to run a Human-brain simulator. Feed it sense data from peripherals, and give it means by which to react. We have a working model for sentience on hand, even though we're not entirely sure how or why it works.

The idea that A implies B, where:

A = Computer complexity has been increasing rapidly
B = Self-aware AI will exist

is pure BS. It's crazy Kurzweilian thinking that doesn't really make sense. You need much more than just computing power for self-aware intelligence... and we don't even know what that ingredient might be! Maybe it's a soul!
> At very worst, we just use that massive ball of computing power to run a Human-brain simulator. Feed it sense data from peripherals, and give it means by which to react. We have a working model for sentience on hand, even though we're not entirely sure how or why it works.

Yes, but we need to know how to make such a simulator - I didn't think we currently knew how to make such a model, in which case the problem is more than simply lacking computing power.
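For a sense of scale on that "massive ball of computing power", here is a back-of-the-envelope sketch. The neuron count, synapse count, firing rate, and per-event cost are coarse, commonly cited assumptions, not figures from this thread.

```python
# Rough estimate of the raw compute a naive synapse-level brain simulator
# would need. ASSUMPTIONS (coarse textbook figures): ~8.6e10 neurons,
# ~1e4 synapses each, ~100 Hz upper-bound firing rate, and ~1 floating-
# point operation per synaptic event.

NEURONS = 8.6e10           # assumed neuron count, human brain
SYNAPSES_PER_NEURON = 1e4  # assumed average synapse count
SPIKE_RATE_HZ = 100.0      # assumed upper-bound firing rate
OPS_PER_EVENT = 1.0        # assumed cost to update one synapse per spike

ops_per_sec = NEURONS * SYNAPSES_PER_NEURON * SPIKE_RATE_HZ * OPS_PER_EVENT
print(f"~{ops_per_sec:.1e} ops/sec for a synapse-level simulation")
# ~8.6e16 ops/sec -- and even with that hardware in hand, as the reply
# above notes, we would still need to know how to wire up the model.
```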
> How would you simulate quantum tunneling?

How is quantum tunneling a necessity in recreating the functions of a nerve cell?
Quantum tunneling in itself really doesn't matter. I can buy electronic components (tunnel diodes) that work on quantum tunneling, and other than having a unique IV (current-voltage) curve there's nothing particularly special about them.
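For what it's worth, simulating tunneling through a simple barrier is textbook physics. Below is a minimal sketch using the standard rectangular-barrier estimate T ~ exp(-2*kappa*L); the electron energy, barrier height, and widths are illustrative choices, not values from the posts above.

```python
import math

# Transmission through a rectangular potential barrier (E < V), using the
# standard exponential estimate T ~ exp(-2*kappa*L). Illustrative numbers:
# an electron at 0.5 eV hitting a 1 eV barrier of varying width.

HBAR = 1.054571817e-34   # J*s, reduced Planck constant
M_E = 9.1093837015e-31   # kg, electron mass
EV = 1.602176634e-19     # J per eV

def transmission(E_eV: float, V_eV: float, width_m: float) -> float:
    """Estimate tunneling probability for a particle of energy E_eV
    through a barrier of height V_eV and thickness width_m."""
    kappa = math.sqrt(2 * M_E * (V_eV - E_eV) * EV) / HBAR  # decay constant
    return math.exp(-2 * kappa * width_m)

for width_nm in (0.5, 1.0, 2.0):
    T = transmission(0.5, 1.0, width_nm * 1e-9)
    print(f"{width_nm} nm barrier: T ~ {T:.2e}")
# The exponential falloff with width is exactly what gives a tunnel
# diode its distinctive IV curve -- easy to compute, nothing mystical.
```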
> What about sentient computers?

Humans get human rights
Animals get animal rights
Computers get computer rights