Should sentient computers have "Human Rights"?

If a computer system were to reach the point where it could demonstrate near-human, equal, or even above-human intelligence, should such a being be endowed with certain inalienable rights by human beings?

Certain rights wouldn't apply to a computer simply because of its nature. E.g. the right not to be subjected to torture or cruel, inhumane punishment is a sensory and emotional right for biological beings that a computer wouldn't have (or could easily ignore when necessary).

But something like the right not to be deleted or have its data tampered with could be a right that a computer could demand once it reaches a particular level of intelligence (and isn't programmed to be a complete lackey to humans).

What do you think of the idea?
 
If it ever becomes possible to program one, then I'd say it should be illegal to program them to become sentient, but the ones who are already sentient should have rights.
 
Very unlikely to happen (a computer being sentient). Even human beings, or less intelligent animals, are not sentient by all definitions. For example, if a human being were under the impression that he/she was a computer, it would be difficult to make him think otherwise, simply because we give a lot of significance to general notions of belonging to the same species / having the same rights / being generally like one another.
Furthermore, a computer would not be exactly human if it had no drives. The movies where computers are shown as human beings without 'emotions' are entirely childish; emotions are thought nebulae; they do not exist without thoughts, and neither do thoughts exist without them.
So I find it very difficult to think that a computer could ever be more than a collection of commands, which at best could be numerous enough, and vary its sub-routines over astronomically many intervals, so as to give the *impression* that the thing is actually thinking. But it would not be thought.

edit: Then again, I am not against the idea that human beings could have been made as a robotic species. After all, if a civilization could alter DNA in such a way that, over a small amount of time (e.g. 100 years), the being produced would be able to function in some basic ways, and then evolve to grasp its own existence (the study of the brain is still in its very early days), then that civ could build something like us. I'm not saying that this happened, or that I would go as far as to say it is probable, but I do not see it as impossible. Which brings up the question of whether human beings are really that sentient, once again ;)
 
I just assume it's possible, because if we ever understand how the human brain works then it would be possible.

Actually, it must be possible; the human brain's existence proves the point.
 
I am not sure such a thing (sentient AI) will ever be possible, but if it is, then yes, they should have rights. I think 'human rights' should simply become 'sentient rights' in this situation.

Read Iain M. Banks's Culture novels for a nice idea of an advanced human-AI society.

Or Dan Simmons. Or Alastair Reynolds.

Asimov is great too, but not very realistic, I think.
 
Are they human? If not, then no.

I could see people like you starting an HSM (Human Supremacy Movement). :lol:

This type of action is dangerous. Both the racism and the giving of sentience.

You guys are forgetting one thing...

If we could give them sentience we could program them not to want rights!!!!!!!!!!!!!!
 
Human rights should extend only to humans.

This isn't even an issue: if we can give them sentience, then it's just as easy to program them to have "wanting to be a slave"-like personalities.

So they wouldn't want rights.
 
I guess they wouldn't really be sentient, then.

Why not? They could still think and consider all the options; they just wouldn't want to have rights.

I can think and consider every option in life, but I don't ever want to join the military. Am I not sentient?
 
Are they human? If not, then no.

But they would be sentient...

Presumably you would be okay with enslaving a sentient alien species then - let's say they are very intelligent but are only at a stone-age civ when we discover them in our FTL ships. Is it okay to enslave them because they are aliens?

I am very interested in your answer: if it's 'yes, enslave', how you think that is moral, and if it's 'no, sentient aliens would have rights', how sentient AI is different from sentient aliens.
 
No,

(ten-character limit)
 
I just assume it's possible, because if we ever understand how the human brain works then it would be possible.

Actually, it must be possible; the human brain's existence proves the point.

This is a tricky point though.
Imagine a program which calculates prime numbers. It runs a routine. Now let's say that those primes were something else: a sort of multi-routine sequence which allowed interconnections, like neural cells. The program could then make those connections along a set path, alterable from its starting number and other variables (roughly analogous to human differences in IQ, a basic parallelism of course). So now the program could calculate a different sequence of movement between all those possible points (e.g. the prime numbers on the line of natural numbers), and in 3D, using some other routine. (After all, human thought is, to a degree, just the use of basic ideas to form ever more complicated ones, with the 'basic' ideas themselves being something very complicated as well.) It would therefore be able to make an astronomically large number of different movements from one possible point to another, and connect them in all sorts of ways.
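To make that picture concrete, here is a toy Python sketch (the sieve, the "small gap" linking rule, and all the names are invented purely for illustration; the point is that however many connections the program can trace, it is still only executing the commands it was given):

[code]
# Toy illustration: compute primes, treat them as "nodes", and link them
# by a fixed rule. The web of connections can grow very large, but every
# step is still just a command being executed.

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def build_connections(nodes, max_gap=6):
    """Link nodes whose difference is at most max_gap -- an arbitrary
    rule standing in for the 'interconnections' described above."""
    links = {p: [] for p in nodes}
    for i, p in enumerate(nodes):
        for q in nodes[i + 1:]:
            if q - p > max_gap:
                break
            links[p].append(q)
    return links

primes = primes_up_to(50)
graph = build_connections(primes)
print(graph)  # {2: [3, 5, 7], 3: [5, 7], 5: [7, 11], ...}
[/code]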
But it still would not be trying to examine its programmed limitations. If it actually tried to, it would cause stability issues. After all, if you are programmed to move from A to B, trying to examine what A and B are will cause unforeseen difficulties. This can be solved by placing A and B on levels other than those the program uses to expand its motifs of movement, of course, but the problem remains that one cannot think about the actual basis of one's own thought, for the simple reason that while thinking one is already utilising a base that is out of reach.
In a way it is like the paradoxes of Zeno: you cannot begin to examine something in absolute depth, because you would already have needed not to have begun; by simply beginning you are already moving away from the point where you started, and you have now stirred a huge number of other mechanisms in thought which cover that underlying point even more.
Besides, it can be put in simpler terms: if you were aware of the programming itself, you would have placed yourself outside the program, and thus been destroyed, since you need the program in order to function :)
 