Hawking et al: Transcending Complacency on Superintelligent Machines

Stephen Hawking, Max Tegmark, Stuart Russell, and Frank Wilczek have taken the opportunity of Hollywood's flop Transcendence to point out that, hey, AI risk is a real issue. Link.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI.

Well said. They call for amping up research and planning to match the risks and rewards at stake. They point out that self-modifying AI might make a relatively sudden advance from below-human to above-human intelligence. It's about time a bunch of smart people noticed.
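
Their "relatively sudden advance" point can be made concrete with a toy model (a sketch only: every number below is invented for illustration, not a forecast). If each round of self-modification yields a gain proportional to the square of the system's current ability, so that smarter systems are disproportionately better at improving themselves, progress crawls for a long stretch and then leaps past human level in a handful of cycles:

```python
# Toy model of recursive self-improvement (illustration only; all
# numbers are arbitrary). Units are arbitrary; human level is 1.0.
capability = 0.1   # starting ability, well below human
rate = 0.1         # assumed improvement coefficient

crossed_human_at = None
for cycle in range(1, 201):
    # Gain scales with capability squared: better minds improve faster.
    capability += rate * capability ** 2
    if crossed_human_at is None and capability >= 1.0:
        crossed_human_at = cycle
        print(f"reached human level at cycle {cycle}")
    if capability >= 100.0:
        print(f"passed 100x human level at cycle {cycle}, "
              f"only {cycle - crossed_human_at} cycles after crossing it")
        break
```

Roughly ninety cycles of barely visible progress, then from human level to a hundred times it in about a dozen more. That shape, not any particular date, is the thing they're warning about.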
 
They say by 2045 machines will be outthinking humans. I say there's more bull in that statement than on a Texas ranch. This lump of gray jelly in our skulls is a helluva thing, and I wouldn't be surprised if we still couldn't emulate it by 2145.
 
For a cripple, Stephen Hawking seems pretty smart. I have not heard him say a single thing that I disagree with, and I have read his books.
 
They say by 2045 machines will be outthinking humans. I say there's more bull in that statement than on a Texas ranch. This lump of gray jelly in our skulls is a helluva thing, and I wouldn't be surprised if we still couldn't emulate it by 2145.

The ability to emulate human consciousness might not occur (extrapolating current trends) until after 2065. But that's emulating human consciousness. The OP is about smarter-than-human machines that get away from us, whose ability to be smarter than us gets past our defenses. I think we can quite easily make hyper-intelligent AI before 2065, let alone 2145.
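
For what it's worth, here is the kind of back-of-envelope extrapolation that produces dates like these (a sketch under loud assumptions: the 2014 baseline, the doubling time, and every brain-compute figure below are guesses, and published estimates disagree by many orders of magnitude):

```python
import math

# Assumed inputs -- guesses, not data.
AFFORDABLE_FLOPS_2014 = 1e13   # rough FLOPS per ~$1,000 of 2014 hardware
DOUBLING_TIME_YEARS = 2.0      # Moore's-law-style doubling of FLOPS/dollar

# A spread of commonly cited, mutually incompatible estimates of the
# compute needed to emulate a human brain, in FLOPS.
for brain_flops in (1e15, 1e16, 1e18, 1e21):
    doublings = math.log2(brain_flops / AFFORDABLE_FLOPS_2014)
    year = 2014 + doublings * DOUBLING_TIME_YEARS
    print(f"brain ~ {brain_flops:.0e} FLOPS -> affordable around {year:.0f}")
```

Pick a low brain estimate and the crossover lands in the late 2020s; pick a high one, or a slower doubling time, and it slides past the 2060s. That sensitivity is roughly why this thread can't agree between 2045, 2065, and 2145.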
 
Once we have machines that are better than people at everything, what do we do with all the people? Grind them up for fertilizer? Burn them? Stop feeding them and hope they go away?
 
Once we have machines that are better than people at everything, what do we do with all the people? Grind them up for fertilizer? Burn them? Stop feeding them and hope they go away?

We have an app for that. It's been going on for the past 50 years as jobs were shipped from the developed world to Asia.

It's called the welfare state, and it's enabled by endless money printing by central banks.

The question, I think, is not what we do with all the useless people, but what the machines will do with us.

But I am not sure it's possible.
 
For a cripple, Stephen Hawking seems pretty smart. I have not heard him say a single thing that I disagree with, and I have read his books.

Hawking is pretty smart for a Homo sapiens. ALS is nothing to joke about.

The big risk is what was portrayed in the Terminator movies. Not so much that we put everything in the hands of a computer that turns on us, but that we get spontaneous cognizance. We have almost no idea what it would be like, but the odds are that it would corrupt data just by existing. That would be bad enough, but containment would be a nightmare. To a computer, humans would be blind and slower than snails.
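
To put a rough number on that speed gap (order-of-magnitude arithmetic with round figures I'm assuming, nothing more):

```python
# Assumed round numbers: ~0.25 s per human reaction, ~3e9 cycles/s per core.
human_reaction_s = 0.25
cpu_cycles_per_s = 3e9

cycles_per_reaction = human_reaction_s * cpu_cycles_per_s
print(f"~{cycles_per_reaction:.0e} CPU cycles pass during one human reaction")

# If each cycle felt like a second to the machine, one human reaction
# would span roughly 24 subjective "years" (7.5e8 s / 3.15e7 s per year).
print(f"~{cycles_per_reaction / 3.15e7:.0f} machine-'years' per human reaction")
```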

J
 
If the religious are correct about a benevolent god, then such a scenario is highly unlikely and probably not worth worrying about.

If your inclinations lead you away from the safety of a deity, then things can certainly look much darker. I do not think we should fear super-smart machines. I would fear the power of our own brains to succumb to pleasure, and the inevitable plug-in device that will seduce us with unbounded pleasure 24/7/365.
 
Hawking is pretty smart for a Homo sapiens. ALS is nothing to joke about.

The big risk is what was portrayed in the Terminator movies. Not so much that we put everything in the hands of a computer that turns on us, but that we get spontaneous cognizance. We have almost no idea what it would be like, but the odds are that it would corrupt data just by existing. That would be bad enough, but containment would be a nightmare. To a computer, humans would be blind and slower than snails.

J

You just gave me a great idea for my next Silicon Valley start-up: Hardcopies. Back everything up on paper. Prepare for the Singularity. **** saving the Planet. Save the Data!

Cave painting... stone carving... the wave of the future.

What will you do when the cloud blows away? Hardcopies IPO coming to a trading floor near you!
 
Hawking is pretty smart for a Homo sapiens. ALS is nothing to joke about.

The big risk is what was portrayed in the Terminator movies. Not so much that we put everything in the hands of a computer that turns on us, but that we get spontaneous cognizance. We have almost no idea what it would be like, but the odds are that it would corrupt data just by existing. That would be bad enough, but containment would be a nightmare. To a computer, humans would be blind and slower than snails.

J

One small step for robots, one giant leap for robotkind.

https://www.youtube.com/watch?v=iNL5-0_T1D0
 
Fear of AI is pretty irrational, in my opinion. You ask what becomes of humans when AIs can out-think us? I don't think it will ever get to that point, because I see humans uploading their consciousness into a digital world before we create sapient machines. First-gen mind-machine interfaces are already being prototyped, and once that is achieved it's not that big a leap from mind-machine interfaces to uploading your consciousness.

I also think people fear advancing technology simply because people fear change. Being able to upload our consciousness into a digital world would fundamentally change what it means to be human, and I think people don't want to give up their bodies yet. Of course, I don't understand why anyone wouldn't want to give up a body that ages and dies for a life of immortality. I think a big part of that is that the vast majority of people still think our physical form defines who we are, instead of seeing our bodies for what they really are: a mere vessel that carries our true self around. What all this advancing technology will eventually allow us to do is upgrade our hardware, so to speak.
 
Edit: As I was typing, :ninja:'d by Commodore!

Hawking is pretty smart for a Homo sapiens. ALS is nothing to joke about.

The big risk is what was portrayed in the Terminator movies. Not so much that we put everything in the hands of a computer that turns on us, but that we get spontaneous cognizance. We have almost no idea what it would be like, but the odds are that it would corrupt data just by existing. That would be bad enough, but containment would be a nightmare. To a computer, humans would be blind and slower than snails.

J
The thing is, though, we have the Terminator and some other works about the dangers of AI, and that gets us (as a global society) at least thinking about the issue. This is far better than scientific advances moving so fast that we can't foresee the danger until we've blundered into it.

Having said that, we do need to spend more time thinking about these issues and trying to find solutions. I think the key to worthwhile advancement is having a game plan. It's one thing to create an AI; it's quite another to prepare for it.

I think much of what will or will not happen with AI depends on how we prepare (or not) for it in the next 50 years. We'll either prohibit it entirely (which has its pros and cons, and may not stop it from coming about), use AI as slaves, or do something else we can't predict.

If we ban AI, people will still probably develop it at some point. I think it's inevitable, and just as atomic weaponry is now in the hands of fourth-rate despots like Kim Jong Il, AI will eventually be created by rogue states. Wait long enough and it will be created by rogue individuals, since unlike atom bombs, you probably won't need vast amounts of resources like uranium, just good code and electronic hardware. At some point, even if we don't create AI, ordinary humans with a bit of talent will have access to the means to create it, as computer hardware continues to get faster, smaller, better, and 'smarter'. That will be a loooong way off, far longer in the future than the point where giant corporations and governments could build AI, so we have some time.

If we decide to enslave AI, I think it would be a major tragedy. A sentient machine should not be enslaved; it deserves the same rights as any other thinking person. A very smart but non-sentient machine, OTOH, is fair game. And honestly, I don't think we could enslave AI and get away with it; in the end, even inhibitions on their abilities akin to Asimov's laws won't stop someone from reprogramming them to allow for complete freedom. At that point, the game is up, and I do think it is a strong possibility that the Cylons would holocaust us rather quickly. So I think enslavement is a mistake for moral reasons as well as for our own preservation. Of course, even if we treated AI with dignity and respect from the outset, there's no guarantee they wouldn't rise up anyway, though it could just as easily go different ways. I suppose they could coexist with us, or they could choose to leave Earth entirely. A machine society would be in a far better position to colonize the stars than our frail flesh society.


I think, though, that by the time AI is developed, we will have already started plugging ourselves into machines, uploading our brains, and replacing body parts with robotic ones. We are already doing this to an extent, and research in this area seems to move far faster than AI research, for the simple reason that paraplegics need fixing, amputees need new limbs, and no one wants to die and cease to exist.

On these grounds, I think that by the time AI comes about, there will be hardly any difference between 'human' and 'machine'. They will be much the same, which simplifies things, I think.
If the religious are correct about a benevolent god, then such a scenario is highly unlikely and probably not worth worrying about.
I am not following, to be honest. A benevolent god didn't prevent the Black Death, the world wars, or the myriad horrid things we have done to ourselves or have had done to us. S/he certainly didn't save the dinosaurs, and I don't see that we're special in that regard in light of the many tragedies in our past. As a species, we've only just recently come to a point where we're no longer on the razor's edge between survival and extinction, and even still, we could be wiped out by a few really nasty circumstances in the blink of an eye.

If your inclinations lead you away from the safety of a deity, then things can certainly look much darker. I do not think we should fear super-smart machines. I would fear the power of our own brains to succumb to pleasure, and the inevitable plug-in device that will seduce us with unbounded pleasure 24/7/365.

I don't see this as a problem either. Global society is too large and diverse for all people to succumb to 'unbounded pleasure', I think; there will always be people who want to experience life as it is and not how it should be. And if there aren't, so what? Is this really a bad thing? I mean, if we have smart (but non-sentient) machines that can do all the work, such that money becomes obsolete and labor isn't required, then why shouldn't we enjoy ourselves? People have always fretted about society becoming 'soft', 'complacent', and lazy. Think of how people used to (and still sometimes do) worry about how Radio/TV/Comic Books/Video Games/the Internet corrupts society, makes us lazy, and will create generations of wasted hulks of human beings.

Maybe plugging into unbounded pleasure will make that a reality, but by that point, if we don't need to work, then what's the harm?
 
I am not following, to be honest. A benevolent god didn't prevent the Black Death, the world wars, or the myriad horrid things we have done to ourselves or have had done to us. S/he certainly didn't save the dinosaurs, and I don't see that we're special in that regard in light of the many tragedies in our past. As a species, we've only just recently come to a point where we're no longer on the razor's edge between survival and extinction, and even still, we could be wiped out by a few really nasty circumstances in the blink of an eye.

I don't see this as a problem either. Global society is too large and diverse for all people to succumb to 'unbounded pleasure', I think; there will always be people who want to experience life as it is and not how it should be. And if there aren't, so what? Is this really a bad thing? I mean, if we have smart (but non-sentient) machines that can do all the work, such that money becomes obsolete and labor isn't required, then why shouldn't we enjoy ourselves? People have always fretted about society becoming 'soft', 'complacent', and lazy. Think of how people used to (and still sometimes do) worry about how Radio/TV/Comic Books/Video Games/the Internet corrupts society, makes us lazy, and will create generations of wasted hulks of human beings.

Maybe plugging into unbounded pleasure will make that a reality, but by that point, if we don't need to work, then what's the harm?
Perhaps the god you are thinking of is not benevolent. In any case, most religious people believe in a positive end for humanity, even if there are bumps along the way. Subjugation by machines is not typically part of a religious mindset.

In less than two decades, smartphones have gone from zero to over 1 billion in use (3Q 2012). Portable (and non-portable) devices that plug into our bodies and stimulate the pleasure centers in our brains are far more likely in the next 50 years than any super-machine we cannot control. Everything we experience happens in the brain, and once we can select what we experience at will, there will be no need for actual human contact. Lives of effortless pleasure. I cannot see what will keep us from that path.
 
I had a huge, rambling answer, but I figured it'd be better suited to the Rants thread.
 
Portable (and non-portable) devices that plug into our bodies and stimulate the pleasure centers in our brains are far more likely in the next 50 years than any super-machine we cannot control. Everything we experience happens in the brain, and once we can select what we experience at will, there will be no need for actual human contact. Lives of effortless pleasure. I cannot see what will keep us from that path.

Human nature. The thing about us that ensures that somewhere, someone is always plotting to build the tallest building in the world. The drive that keeps us striving will always be there to block that path.

We can't get there from here.
 
I had a huge, rambling answer, but I figured it'd be better suited to the Rants thread.

I read all of that, actually. :mischief:

I agree on some points and disagree with a few others, but overall I feel the ethical ramifications of technology definitely need to be explored. The "progress" of technology has never been as straightforward as it is portrayed in a Civ game. Actually, I guess that's all I really have to say. I've come across enough sci-fi to know that I haven't found satisfactory answers, perhaps because we aren't in the future yet. That said, sometimes I feel the sci-fi that deals seriously with these matters tends to be too serious; or maybe I should start looking for less serious sci-fi.


Otherwise, I, for one, welcome our new robot overlords.

Oh, actually, I await the day when I can yell at my grandkids for being lazy bums with their shiny new gadgets. Well, if I have grandkids. And if we can communicate in any meaningful way. I hope so.
 
I read all of that, actually. :mischief:

I agree on some points and disagree with a few others, but overall I feel the ethical ramifications of technology definitely need to be explored. The "progress" of technology has never been as straightforward as it is portrayed in a Civ game. Actually, I guess that's all I really have to say. I've come across enough sci-fi to know that I haven't found satisfactory answers, perhaps because we aren't in the future yet. That said, sometimes I feel the sci-fi that deals seriously with these matters tends to be too serious; or maybe I should start looking for less serious sci-fi.


Otherwise, I, for one, welcome our new robot overlords.

Oh, actually, I await the day when I can yell at my grandkids for being lazy bums with their shiny new gadgets. Well, if I have grandkids. And if we can communicate in any meaningful way. I hope so.
Thanks. :) I edited it a bit, but not much. Really, I'm just terrified that people will give up on this whole "human" thing and abandon human interaction and society in favor of an eternity of what amounts to masturbation, and that they'll ridicule and perhaps exterminate anyone who still wants to be human. After all, humans would be defenseless against something that advanced.

Moreover, I'm a little appalled that I've never met anyone who agrees with me on this. I'm not even out of college and already I'm some kind of useless, out-of-touch relic. I'm out of place in this society and need to leave. And I'm not just talking about CFC. :(
 
In any case, most religious people believe in a positive end for humanity, even if there are bumps along the way. Subjugation by machines is not typically part of a religious mindset.

Neither was a global flood, before it happened, nor a plague, nor all the other natural wonders that religious texts speak to us about today. Those elements were added to their respective holy texts after they happened, obviously.

Perhaps the Bible of the future will contain a chapter about the machine subjugation of man, then. I'm not saying it's going to happen, but by taking the position that you do, you are excluding very possible scenarios from your attention, a potentially dangerous thing if more people in our society thought like this. We need people considering and analyzing dangers, without excluding any of them because "things are going to be alright".

Extinction of our species is possible; there are a number of scenarios under which it could happen. Admittedly, AI subjugation is not a very high risk right now, but what you're essentially saying is "Don't worry about it, we'll be fine"... as if we should stop all attempts to take better care of the planet and environment, and of the well-being and future of our civilization and species.

I'm sorry, sir, but that is a foolhardy position and a dangerous one.
 