Hawking et al: Transcending Complacency on Superintelligent Machines

If the religious are correct about a benevolent god, then such a scenario is highly unlikely and probably not worth worrying about.

If your inclinations lead you away from the safety of a deity, then things can certainly look much darker. I do not think we should fear super-smart machines. I would fear the power of our own brains to succumb to pleasure, and the inevitable plug-in device that will seduce us with unbounded pleasure 24/7/365.

That's a rather shallow sort of benevolence you envision. It seems a bit ridiculous to stuff anything deserving the name of deity in that box.

J
 
Hawking is pretty smart for a Homo sapiens. ALS is nothing to joke about.
J

Yea. Not really a joke; inferior fetuses should be aborted, in my opinion, and he is an exception to the rule. So that was an extreme compliment.

UHG. too many words in comments. me lost.
 
What could be more reliable than paper!

Stone ?


Once we have machines that are better than people at everything, what do we do with all the people? Grind them up for fertilizer? Burn them? Stop feeding them and hope they go away?

The Libertarians and Republicans will deport them all and ban them from marrying.
Meanwhile, Japan will have the first robot government. We love Miku!
 
I'm sorry sir but that is a foolhardy position and a dangerous one.

Did you say sir to a mod?

Threatening to seize 40 billion dollars from the Swiss bank account of the former KGB colonel who currently controls Russia's nuclear arsenal is a foolhardy position and a dangerous one. Not worrying about runaway AI in 2014 seems rational in relative terms.

 
Yea. Not really a joke; inferior fetuses should be aborted, in my opinion, and he is an exception to the rule. So that was an extreme compliment.

UHG. too many words in comments. me lost.

Hawking was a normal person until his college years. He contracted a disease (amyotrophic lateral sclerosis, ALS), which crippled him.

J
 
I am not sure of humanity's capability to program an intelligence more intelligent than itself when it doesn't yet comprehend the nature of its own intelligence. If we did, it would probably have some sort of critical flaw, which may or may not be good for us.

Also, if they make an AI, couldn't they just keep it in an isolated room and not plug it into any sort of network? Whoever does that would be quite a dummy, in my opinion.

I'm also not entirely sure of the applications of AI at the moment, as in why exactly you'd make something with a high degree of creativity if you only need it to focus on a certain task, like we have with machines at the moment. It would make sense for it to learn how to perform a task more effectively, but if you have a car-building robot wasting processing power thinking about Shakespeare, it would be quite annoying. I almost think that if we do develop it, AI would be some sort of accident, not deliberately induced.

While writing the previous paragraph I was thinking about how with human intelligence, the end is (at least to me) survival, but as far as I know with AI there isn't really such a motivation. If an AI were developed it would probably have a focus on efficiency, which might lead to an awkward relationship with humanity.

I think it would be neat to build an AI that couldn't connect to the internet at large and could learn how to perform physical tasks more efficiently over time (though programmers would have to define "efficiency"), which means it would still be quite useful for us. Then, over time, we could develop pathfinding and communication systems between such AIs, ending up with, I suppose, a robot slave army.

EDIT: Oh yeah, we might also have to "teach" AIs like they talked about in SMAC. Which would be interesting to compare to the teaching of humans.

I also forgot to mention that I think another reason for developing AI is if the coding for a task just got too complicated and making a learning program would just be easier (somehow). The wasting of processing power might just become a natural part of AI progression that way as it learns about its surroundings. It would take a while to be effective, but would probably be more cost effective in the long run. It would need to be able to make judgement calls in situations based on its purpose that we might disagree with though.
 

I'd say stone is pretty reliable, though it does suffer from wear... but I guess generally still longer-lasting than paper?

I think on one of those "what would happen if people disappeared" shows, it claimed that the Pyramids will be the evidence of humanity that'd last the longest. Not sure how accurate such a claim would be, but makes sense to me.


The Libertarians and Republicans will deport them all and ban them from marrying.
Meanwhile in Japan, will have the first Robot government. We love Miku !

I, for one, look forward to marrying my robot wife.

Or, well, maybe that'll be for my grandson.

Then again there was that Japanese guy who married a video game character, though I heard the marriage is technically not legally binding. Yet.
 
Okay, so let's make a real-world connection. What about these HFT algorithms that are making most of the equities trades at the present time?

Is this the sort of place where some morons might tinker with our fate by opening Pandora's box? Is this a form of AI? How could we know exactly what is going on there?
 
Well, thinking about it, a stock-market AI is basically the perfect capitalist: completely selfish, with no human ties. It's still controlled by a human, though, in the sense that they decide what to do with the money accumulated. An interesting story to write would be about a guy/company who/which got rich using them, and then, when they tried to spend the money, got ruined by the AI for wasting it or something.

Nothing would stop them from pulling the plug, though. I suppose a subplot might be a rival company infecting the AI with a virus to gain control of the protagonist but then the AI has some sort of self-defense mechanism which means it ends up realizing its own vulnerability and manipulates the market to ensure its survival so that it's never unplugged.

Then depending on whether or not the author wants the story to end happily it either leads to AIs taking over the world or the human protagonists outsmarting it.

Can someone with a better understanding of economics write this story? :)
 
Neither was a global flood, before it happened, nor a plague, nor all the other natural wonders that religious texts speak to us about today. Those elements were added to their respective holy texts after they happened, obviously.

Perhaps the Bible of the future is going to contain a chapter about the machine subjugation of man, then. I'm not saying it's going to happen - but by taking the position that you do, you are excluding very possible scenarios from your attention - a potentially dangerous thing if more people in our society thought like this. We need people considering & analyzing dangers, without excluding any of them because "things are going to be alright".

Extinction of our species is possible; there are a number of scenarios under which it could happen. Admittedly, AI subjugation is not a very high risk right now, but what you're essentially saying is "Don't worry about it, we'll be fine," as if we should stop all attempts to take better care of the planet & environment and the well-being and future of our civilization and species.

I'm sorry sir but that is a foolhardy position and a dangerous one.
The Christian god of the fundamentalists' Bible is just one of many. My point was/is that any world view that contends that humanity has a special place in the cosmos, beyond the luck of evolution and natural selection, will not expect or plan for some inglorious end to all people. It is not the nature of most religions. Some will suffer and others will not. Those who trust only in humanity to save humankind from itself most easily see and fear our destruction through inaction, stupidity and greed. I think you will need both types to keep humanity intact.

Those who put their trust in something outside of the physical world tend to have hope and faith that the path we seem to be on actually leads to a different end. Those without that faith think they see the world more clearly and are rightly afraid of what they see on the horizon.

I have never been very good at predicting the future and don't see any reason to start now. The world is already a dangerous place with more pain and suffering than we can deal with. My time is better spent in the here and now in the world I helped make doing what I can to keep it kinder. The young people (my kids included) need to begin building their world of tomorrow to their liking and not mine. I won't stand in their way.

Will the earth survive? Of course it will.
Will humanity survive? Probably.
Will the world be a better or worse place for people 100 years from now? I suspect the answer will be mixed. What percent of the world's population today is mostly miserable? What percent appears to be miserable when compared to the top 15%?
 
I am not sure of humanity's capability to program an intelligence more intelligent than itself when it doesn't yet comprehend the nature of its own intelligence. If we did, it would probably have some sort of critical flaw, which may or may not be good for us.
From what I understand, most AI research involves machine learning and, basically, a computer program that can rewrite its own code.
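To be fair, "rewrite its own code" is a bit of an exaggeration; in practice "learning" usually means a program adjusting its own numbers based on data. Here's a toy Python sketch of the idea (the function, the data, and the names are all my own made-up illustration, not any real AI system):

```python
# Toy illustration: "machine learning" as a program that adjusts its own
# parameter from data, rather than literally rewriting its source code.

def learn_threshold(samples, labels, steps=1000, lr=0.01):
    """Learn a 1-D decision threshold by nudging it whenever it misclassifies."""
    threshold = 0.0
    for _ in range(steps):
        for x, label in zip(samples, labels):
            predicted = 1 if x > threshold else 0
            # Move the threshold a little in the direction that fixes the mistake.
            if predicted != label:
                threshold += lr if label == 0 else -lr
    return threshold

# Points below 5 are class 0, points above are class 1.
data = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
labels = [0, 0, 0, 1, 1, 1]
t = learn_threshold(data, labels)
print(3.0 < t < 7.0)  # the learned threshold settles between the two classes
```

Nobody hand-picked the threshold; the program found it from examples, which is the basic trick behind far fancier systems.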


I'm also not entirely sure of the applications of AI at the moment, as in why exactly you'd make something with a high degree of creativity if you only need it to focus on a certain task, like we have with machines at the moment. It would make sense for it to learn how to perform a task more effectively, but if you have a car-building robot wasting processing power thinking about Shakespeare, it would be quite annoying. I almost think that if we do develop it, AI would be some sort of accident, not deliberately induced.
Think about Watson, arguably the most powerful AI currently. It was originally tasked with learning how to win at Jeopardy. Then they repurposed it to assist in medical diagnoses. It's teaching itself, learning from the vast depths of medical literature, patient outcome data, etc. It wouldn't surprise me in the least if they could add another functional area to its arsenal, instead of reworking it and losing the medical uses.

On the Media, a radio program from WNYC.org, had a segment on AI this week (you can stream it from their website). A Dutch researcher, talking about computers being smarter than humans, said it's like saying that submarines swim better than people. They are two different things, even though in each case there are some superficial similarities.
 
AI needs to be controlled very tightly; human intelligence has never been kind to anything it deems inferior, so I don't know why anyone would run on the assumption that an AI, much less one designed by humans, would be benevolent.
 
Not worrying about runaway AI in 2014 seems rational in relative terms.

When would be a good time to start thinking about it? All he's saying is "Let's be careful, yo". I don't see why we shouldn't be careful. I mean, we are after all working towards creating true artificial intelligence. It seems rational to consider the risks, even if they may not happen for another 50 years.
 
If we had another sentient species to deal with, and one different enough from us (i.e. not just like Neanderthals - I assume if Neanderthals somehow survived to the modern era they wouldn't fare too badly in our society, relatively speaking), I suppose we would have some experience and knowledge to go on when dealing with powerful AI. But we don't. If, and when, AI becomes that powerful, we'll be dealing with something we've never dealt with. The closest we've come to this sort of thing was when human populations who had no previous contact suddenly came into contact - i.e. Europeans in the Americas, etc. - but even then we were all just humans so there was some predictability.
 
Thinking on Terminator... Is it really dangerous if computers outsmart us? I mean, we have the basic motivations of surviving as a species, reproducing, and eliminating any possible danger to us or our descendants. It seems we, and life in general, are programmed that way. Would a sentient machine have such motivations too if we haven't programmed it that way? Would it develop them by itself? Why?
 
Is it benevolent to give someone the ability to make choices that would destroy the very concept that was intended?

Is it benevolent to hold back such an ability even if that ability would destroy that which was created?

Being benevolent is a catch-22, because there will always be death and destruction no matter how benevolent one is. There will never be any good if the only intent was evil to start with. Evil will never produce good. However, being good all the time will never eradicate evil, unless there is determinism that eradicates evil and never allows it to exist. It would seem that eradicating evil would be the end of humanity, and everything would just be a pre-programmed machine that only does benevolent things and nothing bad ever happens.

Now if you think that it is intelligence that would bring harm eventually to mankind, then perhaps intelligence is the intrinsic evil that determines the fate of humanity.
 
If we had another sentient species to deal with, and one different enough from us (i.e. not just like Neanderthals - I assume if Neanderthals somehow survived to the modern era they wouldn't fare too badly in our society, relatively speaking), I suppose we would have some experience and knowledge to go on when dealing with powerful AI. But we don't. If, and when, AI becomes that powerful, we'll be dealing with something we've never dealt with. The closest we've come to this sort of thing was when human populations who had no previous contact suddenly came into contact - i.e. Europeans in the Americas, etc. - but even then we were all just humans so there was some predictability.

But we DO have other sentient species around, and our track record is utterly abysmal. We exploit them up to (and perhaps beyond) extinction.

Dolphins, Bonobos, Elephants, corvids, some select parrots - all of these have demonstrated "intelligence" and emotional awareness. I've talked about this before, specifically in the Personhood threads.

Time after time human actions and failure of restraint show that we are evil when dealing with non-human sentience.

Why should we assume we'll treat machinehood any differently?
 
But we DO have other sentient species around, and our track record is utterly abysmal. We exploit them up to (and perhaps beyond) extinction.

Dolphins, Bonobos, Elephants, corvids, some select parrots - all of these have demonstrated "intelligence" and emotional awareness. I've talked about this before, specifically in the Personhood threads.

Time after time human actions and failure of restraint show that we are evil when dealing with non-human sentience.

Why should we assume we'll treat machinehood any differently?

Ah, that's true. I forgot about them dolphins. Also octopuses, don't forget octopuses. But I'd say the dolphins, for instance, don't have the mass productivity we do, that is, the ability to enforce large-scale power; I assume in the machines-taking-over-the-world sort of situation the sentient machines would. But perhaps this is a matter of perspective I'm not seeing here.


All that said, I can see how there will be definite similarities when dealing with machine sentience compared with animal sentience.
 
I'd say stone is pretty reliable, though it does suffer from wear... but I guess generally still longer-lasting than paper?
Way, way, way longer than Paper. Paper is not very reliable, under the best of conditions. On top of that, it's bulky enough that even if you're going to build a place where you can have those proper conditions, that's going to fill up fast, and you're going to have a fairly large building that you now have to protect from wear and tear (and fire) and hope nobody wants to use for anything else.
 