Hawking et al: Transcending Complacency on Superintelligent Machines

Animal cruelty laws? If you go out and start killing or even just randomly beating animals for no reason and someone reports it, you are going to jail.

You could even interpret any environmental protection laws as us attempting to recognize and protect the right to life for non-human life on this planet.

So while the laws we have may be poorly enforced, we do have those laws. Which means we do officially recognize the right to life for non-humans on Earth. This recognition seems to be something the anti-technology crowd would not be willing to extend to a sapient AI; and that is outrageous to me.

The majority of those laws are pretty much naked self-interest. And I don't know that we really care all that much beyond making sure people don't take our pets/livestock or the things our society has heavily anthropomorphized. Even at the basest level, randomly deciding to torture/kill animals for fun is correlated with being dangerous to fellow humans as well, so it's in our best interest to have rules criminalizing such behavior so we can deal with those people should they be identified.
 
I came across Transcendence on ScienceFriday. They don't review the movie so much as examine the premise and plot from the point of view of our current understanding of AI, neurology, etc. They talk with Stuart Russell (UC Berkeley prof of Comp Sci/Eng) and Christof Koch (CSO of the Allen Institute for Brain Science).

Ooh, nice find, thanks.


They use a premise I find very hard to buy:
Since lossy integration would necessitate continuous damage to existing memories, we propose it is more natural to frame consciousness as a lossless integrative process.

Um, no. We have craploads of information coming in; what's a few bytes lost here and there?

And from the layman's summary:
“Memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay,” say Maguire and co.

But I'm pretty sure that cognitive scientists have found that retrieving memories causes them not so much to decay as to be reinterpreted, which probably results in some loss of the original information. (Edit: oh, the layman's article goes on to make a similar point. Great minds yada yada.)
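To illustrate what I mean (this is just a toy sketch, not the paper's model; the memory vector, noise level, and retrieval count are all invented): if every retrieval re-stores the memory with a tiny perturbation, the trace drifts gradually rather than collapsing, i.e. some of the original information is lost without anything like catastrophic decay.

```python
import random

def recall_and_restore(trace, noise=0.02):
    """Toy 'reconsolidation': each retrieval re-stores the memory
    with a small random perturbation on every element."""
    return [x + random.gauss(0.0, noise) for x in trace]

original = [1.0, 0.0, 0.5, -0.5]   # invented "memory trace"
trace = list(original)

for _ in range(100):               # 100 retrievals
    trace = recall_and_restore(trace)

drift = sum(abs(a - b) for a, b in zip(original, trace)) / len(original)
print(f"average drift after 100 retrievals: {drift:.3f}")
```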

On the other hand, their "punchline" conclusion - the brain is not a computer - is probably right. An algorithm, by the classical definition, is necessarily Turing-computable. Thus "brain as computer" implies that the brain operates digitally (it has a finite number of possible states and recognizes a finite number of inputs). But neurons are analog machines. In standard models, each input synapse has a real-valued "weight" toward firing the neuron's output. Moreover, neural circuits routinely operate near the critical point where just a little more excitatory or inhibitory input changes the outcome. So it may be that a brain can't even be usefully approximated as a Turing machine.
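For what I mean by "analog machines": in the standard weighted-sum caricature, each synapse has a real-valued weight and the neuron fires when the summed drive reaches a threshold, so near the critical point a tiny change in input flips the outcome. A minimal sketch (weights, inputs, and threshold all made up for illustration):

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Weighted-sum caricature of a neuron: real-valued synaptic weights,
    fires iff the summed drive reaches the threshold."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    return drive >= threshold

weights = [0.4, -0.1, 0.5, 0.3]   # mix of excitatory and inhibitory synapses
print(neuron_fires([1.0, 1.0, 1.0, 0.6], weights))  # False: drive = 0.98, just under threshold
print(neuron_fires([1.0, 1.0, 1.0, 0.7], weights))  # True:  drive = 1.01, a tiny extra input flips it
```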
 
The majority of those laws are pretty much naked self-interest. And I don't know that we really care all that much beyond making sure people don't take our pets/livestock or the things our society has heavily anthropomorphized. Even at the basest level, randomly deciding to torture/kill animals for fun is correlated with being dangerous to fellow humans as well, so it's in our best interest to have rules criminalizing such behavior so we can deal with those people should they be identified.

That's beside the point though. Regardless of the reason behind the law, the law still gives official recognition to a non-human's right to life. I'm not entirely sure why you are making such a big deal out of this.
 
Human consciousness is like magic and can't be programmed -
Scientific study
Layman's summary



Not sure if this has been posted, but it seems related to the topic.

I am curious whether there has been a study on what would happen if the senses were crossed at the time of someone's first whiff of chocolate, so that they related that smell to something else. If that is impossible to do, is there already information stored that would always identify that smell as chocolate?
 
That's beside the point though. Regardless of the reason behind the law, the law still gives official recognition to a non-human's right to life. I'm not entirely sure why you are making such a big deal out of this.

I don't know that it's a big deal, much less such a big deal. I just don't happen to agree. The reasons behind rules are almost never beside the point, though. The reasons influence compliance, punishment, acquiescence, and disdain for the rule. Which makes sense, seeing as rules themselves are morally neutral; they're tools, not goals in themselves.
 
I don't know that it's a big deal, much less such a big deal. I just don't happen to agree. The reasons behind rules are almost never beside the point, though. The reasons influence compliance, punishment, acquiescence, and disdain for the rule. Which makes sense, seeing as rules themselves are morally neutral; they're tools, not goals in themselves.

It's beside the point in the context in which we are discussing it. I stated we extend rights and protections to non-humans and you implied that we do not. I brought up animal cruelty and environmental protection laws to illustrate that we do extend such protection to non-humans.

Now we can go back and forth about the motivations and effectiveness of such protections (in fact I tend to agree with your position about the motivations behind our laws), but that has no bearing on the fact that those protections do exist. That's what confused me about your response. You seemed to be arguing against a point that I wasn't making.

The broader point that I was trying to make was that I fear we as a society won't even extend those same basic, poorly-enforced protections to synthetic life because the prevailing opinion seems to be that a sapient AI wouldn't truly be alive. Combine that with the attitude that since we created the AI, it should always be subservient to humans and you have a recipe for a civil rights disaster. I mean, how long do you think a sapient AI is going to accept being called someone's property?
 
Depends on how that sapient AI works. The only thing I'm arguing is that we'll likely respect an AI's right to live, or however you want to phrase it, only to the extent that we view doing so as in our own best interest. If such an AI is controllable and easily replaceable...
 
Depends on how that sapient AI works. The only thing I'm arguing is that we'll likely respect an AI's right to live, or however you want to phrase it, only to the extent that we view doing so as in our own best interest. If such an AI is controllable and easily replaceable...

And I'm saying we need to change that line of thought. I think when the time comes, synthetic life should be granted the same rights and privileges that we would grant any human. They should be given full citizenship, be allowed to vote, run for office, start businesses, etc. Denying a sapient AI any of that would be just as immoral as denying them to a human.
 
Maybe! Also maybe not. It really depends on the form such an AI takes. I'm unwilling to guess too hard about the specific nature of something that, as far as our experience/knowledge goes, has never existed.
 
Some shmoes from the Future of Humanity Institute at some place called "Oxford" just did an AMA on the reddits. Some really interesting stuff:

Here's one particularly interesting answer from AndersSandberg about alarmism:
There is much to be said for being reactionary :-) In many cases we are simply too stupid to predict the consequences down the line, so making rules and planning ahead in complex domains leads to regulations that don't fit the reality that emerges. So sometimes watching what happens and then acting is the smart thing to do.
But in some cases we cannot allow things to go wrong, especially about existential risk. One nuclear war or designer pandemic is one too many. This is where being proactive actually makes sense.
There is another kind of proactive, and that is trying out things. We need a lot of experimentation since we are too stupid to fully predict what will work or not. Small scale experiments, whether in new societies, new kinds of software security or ways of thinking, are really useful. We can learn from what works and what doesn't.
So I think we should be reactive about things where we don't know how they will turn out, proactive about very risky things, and proactive about figuring out new things.
 
My logic may be off, but if machines cannot mimic behavior, is that holding them back or just protecting them and us? Even if machines could only do "good" things, and they developed consciousness, just them forcing humans to behave in a way that is against human nature would be immoral in its own right.

Are humans afraid that there is no redemptive quality in humanity? Vice is who we are. The problem is not vice, the problem is the ill consequences and the way vice is controlled to benefit the few at the cost of the majority. It would seem that we are afraid that machines are just going to compound the problem and not solve it.
 