Should sentient computers have "Human Rights"?

Outrageous.

Nothing except biological humans should be entitled to so-called 'human rights'. And I can think of plenty of groups of humans who don't deserve the human rights that they get.
 
I don't know if anyone mentioned this paper by Vernor Vinge, but if not, it's an interesting read.

An excerpt:

Within thirty years [written in 1993], we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):

* There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
* Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
* Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
* Biological science may provide means to improve natural human intellect.

The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.

Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's "tool" -- any more than humans are the tools of rabbits or robins or chimpanzees.

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a _dedication_ that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)
 
A thought just occurred to me: if machines became very intelligent, then in many ways this argument could be moot, because it will probably be humans who end up begging for rights (provided the intelligent machines could get hold of the infrastructure that runs our lives).

Innon

That paper is quite interesting. Things may just reach a point where humans are no longer necessary for anything at all; a scary thought indeed.
 
If such machines are developed to the point of actually having intelligence equivalent to humans, then we put them to the sole purpose of deep-space exploration and interplanetary colonisation. Since life support would be unnecessary, we'd send them through space in fast, small craft and just give them responsibility for terraforming other planets. If they were smart enough, they'd work out how to do it once they got there :)

'Rights' would be irrelevant, because we wouldn't waste such intelligences, or risk having them present, on Earth.
 
Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.
I'm not aware of any such trend in AI ... ?

There has been a trend in computing power, of course, but whilst having a certain amount of computing power may be necessary for human-level AI, it is certainly not sufficient. Thus, 30 years is a lower bound on how long we have to wait.
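
To make that "lower bound" point concrete, here's a back-of-the-envelope extrapolation. Every number in it is an assumption for illustration (brain-compute estimates vary by orders of magnitude, and the doubling time is just the usual Moore-style figure); even when the trend line crosses the estimate, that only satisfies the necessary condition, not the sufficient one.

```python
# Toy extrapolation: when might the raw-hardware trend alone cross a (very
# rough) estimate of brain-scale computation? All figures are assumptions
# for illustration, not measurements.

import math

current_ops_per_sec = 1e15     # assumed: a large machine today
doubling_time_years = 2.0      # assumed: Moore-style doubling time
brain_ops_estimate = 1e16      # assumed: ~1e14 synapses * ~100 Hz, a common rough guess

doublings_needed = math.log2(brain_ops_estimate / current_ops_per_sec)
years = max(0.0, doublings_needed * doubling_time_years)
print(f"Trend crosses the estimate in roughly {years:.0f} years")
# Crossing this line says nothing about whether we know what software to run on it.
```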
 
The idea that A implies B where:

A = Computer complexity has been increasing rapidly
B = Self-aware AI will exist

is pure BS. It's crazy Kurzweilian thinking that doesn't really make sense.

You need much more than just computing power for self-aware intelligence.. and we don't even know what that ingredient might be! Maybe it's a soul ;)

Either way, whenever we run into and/or create self-aware intelligence on the level of ours.. or higher.. we shouldn't just give these beings human rights.. they aren't human!.. Instead, we should create a superset that contains human rights.. Personal rights? Sentient rights? And assign those rights to all sentient & self-aware beings/species at human level or above.

IMO
 
The idea that A implies B where:

A = Computer complexity has been increasing rapidly
B = Self-aware AI will exist

is pure BS. It's crazy Kurzweilian thinking that doesn't really make sense.

You need much more than just computing power for self-aware intelligence.. and we don't even know what that ingredient might be! Maybe it's a soul ;)
At very worst, we just use that massive ball of computing power to run a Human-brain simulator. Feed it sense data from peripherals, and give it means by which to react. We have a working model for sentience on hand, even though we're not entirely sure how or why it works. ;)
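
Just to make that loop concrete, something like the skeleton below. BrainModel, read_sensors, and act are placeholders I made up; the contents of BrainModel.step are exactly the part nobody knows how to write, which is the real objection.

```python
# Sketch of the "feed it sense data, let it react" loop described above.
# All names here are hypothetical stand-ins, not an actual simulator design.

import random

class BrainModel:
    """Stand-in for a whole-brain simulation."""
    def step(self, senses):
        # A real simulator would update billions of neurons/synapses here;
        # this placeholder just emits noise.
        return {"motor": random.random()}

def read_sensors():
    # Placeholder peripherals: cameras, microphones, etc.
    return {"vision": [0.0] * 16, "audio": [0.0] * 8}

def act(outputs):
    # Placeholder actuators: motors, speech synthesis, a network socket...
    print(outputs)

brain = BrainModel()
for _ in range(3):              # a real system would run this in real time
    act(brain.step(read_sensors()))
```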
 
At very worst, we just use that massive ball of computing power to run a Human-brain simulator. Feed it sense data from peripherals, and give it means by which to react. We have a working model for sentience on hand, even though we're not entirely sure how or why it works. ;)
Yes, but we need to know how to make such a simulator - I don't think we currently know how to build such a model, in which case the problem is more than simply a lack of computing power.
 
At very worst, we just use that massive ball of computing power to run a Human-brain simulator. Feed it sense data from peripherals, and give it means by which to react. We have a working model for sentience on hand, even though we're not entirely sure how or why it works. ;)

How would you simulate quantum tunneling?
 
I also think we will be unable to create a strong AI. That author was too optimistic about progress in that field (people involved with AI traditionally are). Fortunately, we're nowhere close to that goal.

But if we do create it, I expect that humanity will be finished. Given enough time it would be developed until it is impossible to contain and capable of beating humans, unless humans themselves changed to keep up. Either way, it would be the end of humanity.
 
Yes but we need to know how to make such a simulator - I didn't think we currently knew how to make such a model, in which case the problem is more than simply lacking computing power.

It's not so hard. Henry Markram's lab has done a decent portion of the work in this field.
 
Quantum tunneling in itself really doesn't matter. I can buy electronic components (tunnel diodes) that work on quantum tunneling, and other than having a unique I-V (current-voltage) curve there's nothing particularly special about them.
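
For the curious, here's roughly what that "unique" curve looks like, using one common textbook approximation (a tunnelling term plus the ordinary diode term); the parameter values are made up for illustration, not taken from any datasheet.

```python
# Simplified tunnel diode I-V sketch: a tunnelling term that peaks and then
# dies off (giving the famous negative-resistance region) plus an ordinary
# diode term that takes over at higher bias. Parameters are illustrative only.

import math

I_P, V_P = 5e-3, 0.1        # assumed peak current (A) and peak voltage (V)
I_S, V_T = 1e-12, 0.026     # assumed saturation current (A), thermal voltage (V)

def tunnel_diode_current(v):
    tunnelling = I_P * (v / V_P) * math.exp(1.0 - v / V_P)
    diffusion = I_S * (math.exp(v / V_T) - 1.0)
    return tunnelling + diffusion

for v in (0.05, 0.10, 0.20, 0.35, 0.50, 0.60):
    print(f"{v:.2f} V -> {tunnel_diode_current(v) * 1e3:.2f} mA")
# Current rises to a peak near V_P, dips through a valley, then rises again.
```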

There are two reasons that quantum effects might be important:
1. They provide random data, which can easily be produced by nonbiological means (a quick sketch below).
2. You get the crazy coherence stuff that Penrose talks about (which sounds like standard aging mathematician/physicist baloney to me).
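
On point 1: "easily produced by nonbiological means" really is routine; any operating system's entropy source will do (and on many machines that pool is itself seeded by electrical and thermal noise in the hardware).

```python
# Nonbiological randomness: draw bytes from the operating system's entropy pool.
import os

noise = os.urandom(16)      # 16 bytes of cryptographic-quality random data
print(noise.hex())
```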
 
Well, the issue with separating human rights from computer rights is that it's arbitrary, as opposed to some more universal system (i.e. sapient-being rights).
 