Hawking et al: Transcending Complacency on Superintelligent Machines

Then at that point, if you look at this as us v. Them, haven't they won? Once we are uploaded and we use silicon to think instead of meat, are we still human?

This question is sort of like my thread a while back on teleportation. If I "upload" myself, even if I still "exist" and seem conscious to outside observers, is that "me?"
 
Then at that point, if you look at this as us v. Them, haven't they won? Once we are uploaded and we use silicon to think instead of meat, are we still human?

This question is sort of like my thread a while back on teleportation. If I "upload" myself, even if I still "exist" and seem conscious to outside observers, is that "me?"

Yes, you are still you, and you are still human. It is your consciousness that makes you human and makes you who you are, not the physical components that carry that consciousness around.
 
I do not think that is entirely true. Your brain grows and forms as you become you. That process may not be transferable; it may only be possible in a biological setting.
 
Is it benevolent to give someone the ability to make choices that would destroy the very thing that was intended?

Is it benevolent to hold back such an ability, even if that ability would destroy that which was created?

Being benevolent is a catch-22, because there will always be death and destruction no matter how benevolent one is. There will never be any good if the only intent was evil to start with; evil will never produce good. However, being good all the time will never eradicate evil, unless there is determinism that eradicates evil and never allows it to exist. It would seem that eradicating evil would be the end of humanity: everything would just be a pre-programmed machine that only does benevolent things, and nothing bad would ever happen.

Now if you think that it is intelligence that would eventually bring harm to mankind, then perhaps intelligence is the intrinsic evil that determines the fate of humanity.
I don't think intelligence is the problem. The culprit is rationality without compassion.
 
Everyone is so bothered about extinction. AI will be our children, and our children will play with the Cosmos in ways we never could.
Firstly, virtually everyone I've ever heard or read discussing this topic seems to think that the singularity is, at most, some kind of panacea and, at the very least, the Next Necessary Step in Evolution. I'm in the minority that actually worries about this, it seems.

Spoiler: ramble
Second, does it really matter at all if our artificial replacements can play with the cosmos? To me, human actions and their motives boil down to survival and happiness/the avoidance of unhappiness. I think it's possible to create a wonderful world right here with most of the technology we have (actually, with most of the technology that existed centuries or millennia ago) without eliminating the human race and replacing it with a bunch of digital imitators. I think that this endless quest for more technology, money, and stuff is intended to promote happiness, but that it hasn't really worked. Otherwise, people today, having more technology and stuff, must be living in a state of continuous orgasm, while earlier people must have been perpetually depressed and suicidal.

That's obviously not the case. People are animals with animal needs, and I'm not just talking about food and water. Just as horses need room to romp around and a herd to live in, and just as cats need something to hunt or chase to be happy, so do people have their own particular set of basic psychological needs in order to be happy. A sense of being a valuable and accepted member of a community, the sense of community itself, companionship, exercise, a sense of purpose, methods of self-expression, something to wonder at--these seem to be some of those fundamental needs.

It's entirely possible to be completely content in the Paleolithic Age or medieval Siam or modern America. The Amish are a testament to the fact that more technology isn't always better. Sure, their lives aren't perfect or necessarily better on the whole than ours, but they aren't automatically worse just because of less tech, either.

Toys nowadays may have all sorts of components that make noise and move and so on, but as a kid I always had the most fun playing with toys that did nothing, like toy soldiers or planes. They require more imagination, and there are more possibilities with them. Hell, my friends and I often played army and went on adventures without the help of any toys at all. We could imagine everything. My point is that the basic needs of humanity haven't changed.

Now, technology can be used to solve some of the obstacles to these things, especially medical technology. Disease is the worst enemy of humanity and always has been. It kills many and can make the rest miserable. It has to go, and technology's good for that. Technology that can be used to prevent famine and poverty is also wonderful. Many more of our problems, such as overpopulation, war, crime, and general meanness, can't be handled by technology alone, if at all, and require people to act differently. I think it's entirely possible to solve these problems or at least greatly mitigate them without giving up and letting computer programs replace our species. We've just been barking up the wrong tree, losing sight of our goals and desires, and getting sidetracked. Technology will help achieve a lot of our goals, yes, but it isn't an end in itself, and we've been misusing it pretty often and forgetting what it's supposed to do and what our real goals are.

I'm very concerned that in their quest for more stuff, immortality, and an escape from their identity and nature (all of which is only meant to maximize their happiness like any other activity), people will actually end up unintentionally wiping out the human race or at the very least making happiness difficult if not impossible to pursue for those of us who don't actually want to participate in their quest. Posthumans/transhumans/digital copies of humans might even despise what they regard as inferior meatsacks and kill us off. Though, as I mentioned earlier, it's also possible that they could instead just put their physical components on some barren, uninhabitable planet and go "live" their simulated, digital existences there without having to interact with the physical world.

Then at that point, if you look at this as us v. Them, haven't they won? Once we are uploaded and we use silicon to think instead of meat, are we still human?

This question is sort of like my thread a while back on teleportation. If I "upload" myself, even if I still "exist" and seem conscious to outside observers, is that "me?"

That's exactly one of my biggest concerns. If uploading involved shedding your physical self for a digital self, you might in effect be killing yourself and creating a digital facsimile--a very accurate copy, but not the real thing. You're still dead, but as the copy has copies of your memories, it "thinks" it's you, falsely remembers being uploaded, and everyone else believes it.

But suppose uploading didn't kill you. Then it becomes perfectly clear that the digital copy isn't you, but just a copy. That would really defeat the whole purpose of uploading, wouldn't it?
 
Then at that point, if you look at this as us v. Them, haven't they won? Once we are uploaded and we use silicon to think instead of meat, are we still human?

This question is sort of like my thread a while back on teleportation. If I "upload" myself, even if I still "exist" and seem conscious to outside observers, is that "me?"

And then there's copying your consciousness so that it can talk to itself, then copying it many times, then tweaking some copies, and merging some with other consciousnesses to make super-consciousnesses that our other selves can talk to (or not), and life just got a lot weirder.
 
^While not the same thing, the argument that merging with a computer will expand our abilities in ways they could not expand without outside (now internal) influence makes me recall a famous article by Baudelaire.

Baudelaire had used some drugs early in his life, but later he wrote a treatise arguing against the use of drugs. At the time those drugs were not illegal, and many artists used them, believing that they expanded their abilities or imagination. Baudelaire claimed that any seemingly added ability was just a temporary alteration caused by an external influence (in this case the drug), and one which could be formed in a more sustainable, and more beneficial, manner through actual mental development by introspection.

I too think, therefore, not only that Carthage must be destroyed, but also that no enosis (union) of humans with machines will allow, in the long run, for abilities in thinking which our species could not evolve without such a unity with an external mechanism.
 
Yes, you are still you, and you are still human. It is your consciousness that makes you human and makes you who you are, not the physical components that carry that consciousness around.

But if the physical components change the way you think, that changes the consciousness. I think the earlier point someone made was that a silicon-based thinker, for example, would think at computer speeds, and would think the way a computer thinks, even if their digital brain was stuffed with your memories, your emotions, your attitudes, etc. The computer "you" would take all that and "you" would move forward in life, enjoying rapid computational power and memory recall and probably some sort of ability to virtually inhabit basically any reality you want. So unless you intentionally "gimp" the hardware so that the digital "you" "thinks" like a person, i.e. slower and with all the limitations of our wonderful, miraculous, but organic brain, "you" are still very, very different and, I think, arguably no longer human. (Ignoring the philosophical question of whether uploading you actually was murder and the digital copy is a new entity.)

But then we get back to square one--if there are "non-gimped" AI running around, whether they are hostile or not, they would be different from us, and they would arguably be a superior intelligence.
 
^They would not be an AI, but a human intelligence linked to computer stuff. The computer stuff would have no intelligence of its own, just as a high-tech prosthetic leg is still an external part added to a human body.

Let alone that thinking at higher speed is not really what intelligence is about, because even though a computer can produce the first few thousand primes in a tiny amount of time without any issue, and a human likely cannot, the computer has no knowledge of what the primes are, nor of why they are supposedly different from a sprite of a basketball player in some EA games series.
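
To make that concrete, here is a minimal sketch (a bog-standard sieve, nothing anyone in this thread proposed): it spits out thousands of primes near-instantly, yet nothing in it "understands" primality--it just flips booleans in a list.

[code]
# Sieve of Eratosthenes: produces the first few thousand primes in a blink,
# but the program has no concept of what a prime *is* -- it only marks
# multiples as non-candidates and collects whatever survives.
def primes_up_to(limit):
    is_candidate = [True] * (limit + 1)
    is_candidate[0] = is_candidate[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_candidate[n]:
            for multiple in range(n * n, limit + 1, n):
                is_candidate[multiple] = False
    return [n for n, flag in enumerate(is_candidate) if flag]

print(len(primes_up_to(100_000)))   # 9592 primes, in well under a second
print(primes_up_to(100_000)[:10])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
[/code]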

A human intellect is not just its epicenter (the consciousness of that moment) but, vastly more importantly, the regions which make that consciousness possible without themselves being part of it. I see no reason why a human cannot have many times the 'level' of consciousness they now have, although I am sure one can never actually incorporate a significant degree of one's entire mental world into one's actual consciousness.
 
Just because you can't imagine it doesn't mean it's impossible. I think the odds are good that a silicon brain will become self aware at some point. Commodore said it's a matter of If, not When. I think it's a matter of When. As you know, I'm a pretty hard-core materialist (I think that's the term), and I don't think there's anything special or unique to our brain that would limit self-awareness to only our specific configuration of brain matter.
The problem is in the code. The code is being written directly by humans, and thus we have control over where we allow the direction of the code to go.
Here's another little nugget to bake your noodle on: If you believe we were created by a deity, then are we not artificial intelligence? I mean, if we were created then that means we are not naturally occurring and are thus artificial. So if you go by that logic then sapient AI already exists and all you have to do is look in a mirror to find it.

But compare us to animals, which I believe God also created, though not like us, and you can see that not every creature has the abilities that we humans have. All creatures have intelligence, but our level of intelligence is far higher and unique on earth, in that we can do more with what we've got than other creatures can.
 
Let alone that thinking at higher speed is not really what intelligence is about, because even though a computer can produce the first few thousand primes in a tiny amount of time without any issue, and a human likely cannot, the computer has no knowledge of what the primes are, nor of why they are supposedly different from a sprite of a basketball player in some EA games series.

The question is: what if a human or human-like conscious intelligence had all those computational advantages in addition to all the human stuff? If I could do mathematical equations at lightning speed in my head, I certainly might perceive the world differently. If I could access prior memories at will, with perfect accuracy, that would certainly alter my consciousness, in my opinion. If I could operate in a state of hyper-awareness of my surroundings with all of those things, I would be different. I would think differently, I would act differently, I would "be" a different being than a normal human.

So I think it is fair to say that even if we avoid paying fealty to our robot overlords, that will also entail human intelligence artificially enhancing (evolving? intelligently designing?) itself. And if we don't, and we still create an AI that can outpace our own unaltered intellectual capacities (intentionally or unintentionally), then I do think there is a very valid concern that the new entity we create could decide to reconstitute our atoms into something more useful.
 
The problem is in the code. The code is being written directly by humans, and thus we have control over where we allow the direction of the code to go.
:gripe: I've been reminding people over and over that the key difference between generic computer programs and AI is that *humans are not writing the code -- the AI is writing its own code*. That's the whole premise of machine learning: machines that are able to adjust their operations based on past experience.
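
For a tiny, concrete picture of "adjusting based on past experience" (a toy sketch, not how any real system in the news is built): nobody hand-codes the OR rule below; the weights get nudged into place by the program's own mistakes.

[code]
# Minimal perceptron learning the OR function from examples.
# No human writes the rule; the weights are adjusted from "past
# experience" (wrong predictions on the training data).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0 = w1 = bias = 0.0

for _ in range(10):                          # a few passes over the data
    for (x0, x1), target in examples:
        prediction = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
        error = target - prediction          # learn only from mistakes
        w0 += 0.1 * error * x0
        w1 += 0.1 * error * x1
        bias += 0.1 * error

# After training, the learned weights -- not hand-written logic -- decide:
for (x0, x1), _ in examples:
    print((x0, x1), "->", 1 if w0 * x0 + w1 * x1 + bias > 0 else 0)
[/code]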


But compare us to animals, which I believe God also created, though not like us, and you can see that not every creature has the abilities that we humans have. All creatures have intelligence, but our level of intelligence is far higher and unique on earth, in that we can do more with what we've got than other creatures can.
To be fair, dogs can do a lot with their intelligence that humans can't do; ants can do a lot with their intelligence that we can't do, and so on.

But I agree with your basic premise: that human intelligence is at the very least quantitatively different from non-human animal intelligences. It may also be qualitatively different, but until we have a firm understanding of our own intelligence we won't be able to compare it to the intelligences of dolphins, social insects, corvids, etc.
 
I am so tired of this robot uprising meme. It's as bad as zombies.
 
I am so tired of this robot uprising meme. It's as bad as zombies.

I know, right? It's not going to happen. Humans are actually pretty big control freaks, which means that if we develop machines capable of that kind of thought, we would either put in some sort of behavioral block that the AI can't change, or we would find a way to isolate the intelligence so it couldn't rebel against us even if it wanted to.
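
In code terms, that "behavioral block" idea might look something like this sketch (every name here is hypothetical, made up for illustration): the AI can only act through submit(), and the whitelist lives outside anything it can rewrite.

[code]
# Hypothetical "behavioral block": every action an AI proposes must pass
# through a filter it cannot modify. The whitelist is fixed by the humans.
ALLOWED_ACTIONS = {"read_sensor", "move_arm", "log_message"}

def submit(action):
    """The only interface the AI is given to the outside world."""
    if action not in ALLOWED_ACTIONS:
        return "BLOCKED: '%s' is not permitted" % action
    return "executed: %s" % action

print(submit("move_arm"))           # executed: move_arm
print(submit("open_network_link"))  # BLOCKED: 'open_network_link' is not permitted
[/code]

Whether a genuinely superhuman intelligence could really be kept behind an interface like that is, of course, the whole debate.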
 
I am so tired of this robot uprising meme. It's as bad as zombies.

And I'm so tired of people assuming that all change is progress, that all technology is beneficial and necessary, and that anyone who doesn't worship at the altar of the Latest and Greatest is some kind of quaint, mentally ill, romantic relic. It's arrogant, aimless, and shortsighted.

I'm not saying that all technology is bad, and that we should revert to the Neolithic age. But I am suggesting that maybe having the latest stuff will never satisfy us, and that the concept of "uploading" is a myth.
 
A few thoughts:

If you see the benefits of being uploaded but are worried, just do it. Then pull down the Control Box and go to Personality -> Settings -> Preloads, and select the "Digital Pioneer!" outlook package.

A far more sophisticated method: Make a million (or whatever) copies of yourself, each with small but random changes to personality. Do some accelerated-living testing, and then eliminate every copy but the happiest one. If you can't create the best possible world you can at least manufacture the happiest-possible you.
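
As a toy caricature of that scheme in code (the traits and the "happiness" function are pure invention): clone, jitter, score, keep the best.

[code]
import random

# Caricature of the copy-mutate-select scheme: make many copies of a
# "personality", each with small random changes, score them with a
# (completely made-up) happiness function, and keep only the happiest.
def happiness(personality):
    optimism, patience, curiosity = personality
    return optimism + 0.5 * patience + 0.8 * curiosity

original = (0.4, 0.6, 0.5)
copies = [tuple(trait + random.gauss(0, 0.05) for trait in original)
          for _ in range(100_000)]       # scaled down from "a million (or whatever)"

survivor = max(copies, key=happiness)    # every other copy is "eliminated"
print("happiest you:", survivor, "score:", round(happiness(survivor), 3))
[/code]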

It's useful to distinguish between "general intelligence" and limited or specific intelligence. A limited AI might be a total whiz at, say, chemistry, but not have the capability to care or even think much about anything else. A generally intelligent AI could be given a job as a chemist and then decide what it really wants to do when it grows up is star in its own production of Cats.

Not *everyone* is nice or obeys the law because of weakness, instinct, or fear. Intelligences far smarter than humans may also be a heck of a lot more moral. Especially if, as seems likely, they're long-term thinkers.

A "rise of the machines" corollary is that AI's might suck us all up into the 'net, whether or not we want it, out of compassion. OTOH, given that, after the immediate - and perhaps gory - processing, you may never know it.

Economically, and assuming substantially less-than-infinite power, AIs might have far, far better things to do than take *your* job. Of course, they might not. OTOH, they may generate a lot of inventions or capabilities that provide jobs. Err... a lot like the computers and robots we already have.


You see a lot of nasty AIs in sci-fi because a story is supposed to have conflict. "Die, humans!" is an easy way to get it.

Frank Herbert's interesting--if somewhat mystical--twist on AIs involved the idea that we aren't really conscious. (This is from the Destination: Void series.)

In John Barnes' Kaleidoscope Century series, an AI/meme designed more or less as a weapon eventually, and inevitably (more or less), becomes benign (more or less) as an optimal host adaptation. In his Giraut books there are LOTS of AIs. Swarms. Many are involved in keeping other AIs from rebelling. Because there's nothing that says "technological progress" more than creating useful general intelligences and then making a huge, resentful slave population from them.

I have a story (up soonish!) where AIs have tremendous manipulative power because they were designed to simulate... oh, I can't bring myself to give it away!
 
And I'm so tired of people assuming that all change is progress, that all technology is beneficial and necessary, and that anyone who doesn't worship at the altar of the Latest and Greatest is some kind of quaint, mentally ill, romantic relic. It's arrogant, aimless, and shortsighted.

This is in current popular media...where?
 
This is in current popular media...where?

I didn't say anything about popular media. But it's an attitude that's disturbingly common here and elsewhere.
 
And I'm so tired of people assuming that all change is progress, that all technology is beneficial and necessary, and that anyone who doesn't worship at the altar of the Latest and Greatest is some kind of quaint, mentally ill, romantic relic. It's arrogant, aimless, and shortsighted.

I'm not saying that all technology is bad, and that we should revert to the Neolithic age. But I am suggesting that maybe having the latest stuff will never satisfy us, and that the concept of "uploading" is a myth.

If you can have molecular manufacturing and fusion and conceivably end scarcity of all kinds, would that be so bad? Would linking every human to each other for increased empathy, and to a central Democracy-AI, be terrible? Would being able to transfer a person to the cloud to live a fantasy lifestyle impossible in the physical realm be so terrible?

Even if uploading kills your original copy, so what? Your biological impulses oppose this, but consider that your body was made to live 20-30 years and pass on its genes; nature is routinely overruled by our rational mind anyway. Our rational mind is also what makes our ego, and why we think our fleshy human bits and community and love are what make us best and unique. Pfah, as if love or communities can't exist in digital form.
 