Professor Hawking to world - Fear the Reaper(s)!

So yeah, it turns out Stephen Hawking must be a Mass Effect fan.

Stephen Hawking warns artificial intelligence could end mankind

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

...

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

"It would take off on its own, and re-design itself at an ever increasing rate," he said.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

So is Skynet / Harbinger / HAL inevitable if we continue tinkering with AI?
 
The problem isn't going to be AI rising up and taking us out. The problems are going to be legal ones, fought out in courtrooms between people who hate robots and don't want them to have equal rights, and those who think "it's about time". And guess what happens as soon as something that can easily reproduce gets voting rights. Our society is going to turn in an entirely new direction, and one day it may be fully directed by machines rather than human minds. I think we're heading in that direction and there really isn't a way to turn around, but what's going to be interesting (and at times dangerous) is how exactly we get there.

That's what I think, anyway. Machines rising up won't happen until after all that is sorted out... or, I suppose, in some unlikely scenario where it isn't sorted out (when machine minds don't get equal rights, as in The Matrix, and they rise up).
 
We should fear nuclear apocalypse and Caesar imposters. And raging gangs in the Mojave or elsewhere in the Wasteland.
 
The problem isn't going to be AI rising up and taking us out. The problems are going to be legal ones, fought out in courtrooms between people who hate robots and don't want them to have equal rights, and those who think "it's about time". [...]

Well then it seems I am going to be part of the problem that brings about our downfall. From 2009:
Even "intelligent" robots don't deserve rights. If Data from Star Trek were here right now, I'd have no more problem putting a 9mm into his positronic net than I would a Chevy big block.
 
I suggest reading Existence by David Brin. Set in the near future, it includes numerous 'excerpts' from Pandora's Cornucopia, a reference work of the times detailing the multitude of technological 'traps' that can destroy budding civilizations. Many of them would leave 'residuals' that would destroy any civilizations that come after in the vicinity...vicinity often being defined as 'same corner of the galaxy'.

This AI business is examined extensively.
 
We should all take Mr. Hawking aside and show him the AI in most video games; maybe he will change his mind.
 
This is something the Singularity Institute actually worries about. They've done a reasonable amount of the ground-level thinking, and there are really excellent articles and lectures out there for anyone who's interested. It's then possible to build on that ground-level thinking to make sure we get ahead of the curve on this one.

I think AI has more short-term risks for us, namely automation-induced unemployment, but that might be a different topic.
 
The suggestion that a mere human could design a machine with the ability to redesign itself is a rather bold claim.
 
We don't need to do that, actually. A major theorised mechanism is that we invent evolutionary algorithms that have that as a desired outcome. We don't even need to come close to understanding how to build a bootstrapping AI, even.
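
To make that concrete, here's a minimal sketch of an evolutionary algorithm in Python -- the fitness target, population size, and mutation rate are all made up for illustration, and this is nothing close to a bootstrapping AI. The point is just that the programmer specifies a goal and a mutation scheme, not the solution:

```python
import random

# Toy evolutionary algorithm: evolve a bit-string toward a target.
# TARGET, the population size, and the mutation rate are arbitrary
# illustrative choices, not anything proposed in this thread.
TARGET = [1] * 20

def fitness(genome):
    # Score a genome by how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # Selection: keep the fitter half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
    if fitness(population[0]) == len(TARGET):
        print(f"Solved at generation {generation}")
        break
```

Nobody tells the algorithm which bits to flip; selection pressure finds the target on its own. Scale the idea up and you get the 'desired outcome' mechanism described above.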
 
We don't need to do that, actually. A major theorised mechanism is that we invent evolutionary algorithms that have that as a desired outcome. We don't even need to come close to understanding how to build a bootstrapping AI, even.

That is the catch. The logical way is to understand our own biology 100% before trying to invent something smarter than us. They have decoded the rat's DNA, the sheep's DNA, and what else...

We will have human bio-engineering soon enough, they say.

tl;dr: we are good at writing algorithms, but not at understanding evolution fully.

// Hawking said that humans are evolving biologically, and slowly. What is the final destination of that? How can we accelerate our own biological evolution? etc.
 
Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

"It would take off on its own, and re-design itself at an ever increasing rate," he said.

That's a rather bold claim, especially considering we don't yet fully understand the human brain ourselves.

We don't need to do that, actually. A major theorised mechanism is that we invent evolutionary algorithms

Yes, well, those do not exist.
 
At risk of pointing out the obvious, no biological species lasts forever -- it goes extinct without leaving descendants, or it diversifies into new species (which eventually supersede/ replace the parent species on a geological timescale). So, on a geological and certainly on a universal timescale, 'humans' (Homo sapiens) are doomed anyway, it's just a question of when -- sooner or later? -- rather than 'if'...

Our last common ancestor with the chimps existed around 6 Mya. Our proto-human ancestors (which also gave rise to the other Homo species -- none of which survive today, obviously) first appeared around 3 Mya. Our species in its present form has 'only' been on Earth for around 250,000 years (~12,500-15,000 generations), and look how far we've come in that time -- from central Africa to the entire world, and from stone hand-axes to the Silicon Age. Who's to say what the next stage(s) might be?

Assuming that we have sufficient time to diversify biologically before someone pushes a Big Red Button :bump: :nuke:, I'm thinking that H. sap.'s gene pool will most likely separate along ideological/ cultural lines (e.g. 'naturally-conceived' vs. 'gene-engineered', and/or 'believers' vs. 'rationals'). Alternatively, if a sufficiently large subpopulation successfully moves off-planet (not necessarily to another solar system, nor even another planet/moon of our system -- there are plenty of asteroids that we could hollow out and spin up to make space-borne habitats), simple spatial separation of the two populations might serve as another source of an evolutionary split (planet-bound vs. spaceborne).

Or a split could be purely technological. While we don't have a thorough understanding of brain structure yet, once we become able to model all the interneurone cell-cell connections/interactions of a biological brain (which I think is now really just a question of time and computing power), then in terms of functionality, we would effectively have produced an artificial brain that was capable of doing everything that a meatbrain can do, but millions of times faster -- and without the biological weaknesses of e.g. fatigue poisons/ negative hormonal effects (fear, anger, etc.). Such technology could open the door to e.g. uploading human minds into hardware substrates, and achieving 'AI' by that route.
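
As a toy illustration of the 'model the connections' idea -- and it is only a toy, with arbitrary sizes and random weights standing in for a real connectome -- this is the basic shape such a simulation takes:

```python
import numpy as np

# Toy 'connectome' simulation: neurons as nodes, synapses as a weight
# matrix, firing rates updated in discrete steps. All numbers here are
# arbitrary illustrative choices, not neuroscience.
rng = np.random.default_rng(0)
n_neurons = 100
weights = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))  # synaptic strengths
activity = rng.random(n_neurons)                             # initial firing rates

def step(activity, weights):
    # Each neuron sums its weighted inputs, squashed to a 0-1 firing rate.
    return 1.0 / (1.0 + np.exp(-weights @ activity))

for _ in range(50):
    activity = step(activity, weights)

print("mean firing rate after 50 steps:", activity.mean())
```

The gap between this and a human brain is roughly nine orders of magnitude in neuron count, which is why it really is 'just' a question of time and computing power.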

Such A-minds would then effectively become immortal, so long as their energy supply could be maintained. Maintaining that power would at least initially be the responsibility of the meatbrains, so (to some extent) the first A-minds would still be at our mercy (as long as no-one was silly enough to put them in charge of robotic manufacturing plants and/or ICBMs, anyway! ;) ), giving them an 'incentive' to think and plan for the long-term common good of both 'species'...
 
That is the catch. The logical way is to understand our own biology 100% before trying to invent something smarter than us. They have decoded the rat's DNA, the sheep's DNA, and what else...

It might not actually be the 'logical' way to proceed. There is a real need to create advanced intelligence to solve specific problems. There are many problems we look at that could be 'cracked' if there was enough intelligence brought to bear.

So, the push is to create intelligence as we learn how to create intelligence. We don't need to wait to learn how the brain works first. Learning how the brain works, though, certainly helps our ability to figure out how to create intelligence.
 
Going by the bit of the article in the OP, it sounds more like Hawking wanted some extra airtime on the news. I am not seeing how what he is saying represents any development in how the issue of AI is viewed. If anything, we still have not concluded whether actual 'conscious' AI can happen even in the future.

Personally, I doubt it can. Consciousness has no evident correlation with the vastly larger non-conscious mental world of any conscious being. It is a bit like asking how many times a finite number can be said to be contained in an infinity.
 
It's more like asking how a bunch of non-living protein and fatty acid molecules could ever become alive. Or how a bunch of non-conscious neurons could ever become conscious.

On top of that, there's no reason to assume that an AI could never seem conscious. And it's the seeming of consciousness that can/will produce all of the solutions for which the AI was designed. It will also be the cause of the problems.
 
We should all take Mr. Hawking aside and show him the AI in most video games; maybe he will change his mind.
I realise that you're joking (kind of) -- but one of the mathematical rules that evolutionary theory and real-world observation demonstrate very well is that complexity (both of 'organisms' and 'behaviours') can indeed arise from the interaction of only a few relatively simple rules plus feedback mechanisms.
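
Conway's Game of Life is the textbook demonstration: two rules and a feedback loop, yet gliders, oscillators, and even self-copying patterns emerge. A minimal sketch:

```python
import numpy as np

def life_step(grid):
    # Count the eight neighbours of every cell, with wrap-around edges.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Rule 1: a dead cell with exactly 3 neighbours is born.
    # Rule 2: a live cell with 2 or 3 neighbours survives.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

rng = np.random.default_rng(42)
grid = (rng.random((30, 30)) < 0.3).astype(int)
for _ in range(100):
    grid = life_step(grid)
print("live cells after 100 steps:", grid.sum())
```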

And you have to admit that in the Civ series, some of the AICivs (i.e. rulesets) tend to be consistently better than others at surviving/winning within the game-environment -- and I don't think it's a coincidence that the survivor-AICivs generally tend to be the ones with enhanced reproduction (e.g. Agri-civs in CivIII!) and/or high aggression-ratings (that build lots of mil-units). What the Civ-games don't do, though, is to let those AICivs mutate and adapt (i.e. to vary their own rules) according to what 'works' best for their starting environment(s).

Imagine if an AICiv's algorithms could reproduce themselves with some random variation of certain parameters, and be played off against other rulesets. Or if an AI-routine could 'observe' the way its human opponent plays a game, and then adopt/ adapt some of those tactics itself (e.g. setting up a Settler-factory in CivIII). If it could do that, we could swiftly end up with a genuinely 'dangerous' Civgame-AI that would be a challenge to beat, without having to boost difficulty levels simply by using artificial buffs to AI-growth/ production/ research stats...
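
For what it's worth, that loop is easy to sketch. Everything below is hypothetical -- play_match() is a stand-in for running an actual game between two rulesets, which is of course the hard part:

```python
import random

def random_ruleset():
    # An AICiv 'ruleset' reduced to two hypothetical personality knobs.
    return {"aggression": random.random(), "expansion": random.random()}

def mutate(ruleset, sigma=0.1):
    # Reproduce with some random variation of each parameter.
    return {k: min(1.0, max(0.0, v + random.gauss(0, sigma)))
            for k, v in ruleset.items()}

def play_match(a, b):
    # Placeholder: a real version would play out a full game between
    # the two rulesets. Here, higher knob totals plus noise win.
    diff = sum(a.values()) - sum(b.values())
    return 1 if diff + random.gauss(0, 0.2) > 0 else 0

pool = [random_ruleset() for _ in range(16)]
for generation in range(20):
    # Round-robin playoff, then the winners reproduce with mutation.
    scored = sorted(pool,
                    key=lambda r: sum(play_match(r, rival) for rival in pool),
                    reverse=True)
    winners = scored[:8]
    pool = winners + [mutate(random.choice(winners)) for _ in range(8)]

print("best surviving ruleset:", max(pool, key=lambda r: sum(r.values())))
```

Swap the placeholder for real games (or for 'observe the human and score imitations of their tactics') and you have roughly the adapt-to-what-works loop described above.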
 
Well then it seems I am going to be part of the problem that brings about our downfall. From 2009:

Oh, I know your position on this. For some reason I remember it well. :p

I think you might have to face this at some point during your life.. but maybe not. I'd say it's 50/50 at this stage, depending on how long you last.
 
It might not actually be the 'logical' way to proceed. There is a real need to create advanced intelligence to solve specific problems. There are many problems we look at that could be 'cracked' if there was enough intelligence brought to bear.

So, the push is to create intelligence as we learn how to create intelligence. We don't need to wait to learn how the brain works first. Learning how the brain works, though, certainly helps our ability to figure out how to create intelligence.


Well, the cracking usually revolves around creativity of some sort. AI has none, because its ability to be creative is preprogrammed.

A parallel here: the mathematician Évariste Galois, in the 19th century, worked out why equations of degree five and higher cannot be solved by a general formula. He got there by studying the structure of degree-3 and degree-4 equations.

There was no hard deduction, just looking at the structure and making hypotheses until one of them proved to be true.
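
For reference, the result in precise form: degrees 2 through 4 admit general formulas in radicals, and Galois's structural criterion explains why degree 5 does not.

```latex
% Degrees 2--4 admit general formulas in radicals, e.g. the quadratic:
\[
  x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}
\]
% Galois's criterion: a polynomial equation is solvable by radicals
% if and only if its Galois group is solvable. The general quintic
% has Galois group S_5, which is not solvable, so no degree-5
% analogue of the formula above can exist.
```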

// I would be more interested in figuring out how Da Vinci managed to engineer a tank and a plane five centuries ahead of their time. What made his brain capable of such feats? Given the information he had, and given that he was a very zealous learner with an IQ said to be around 200, what could such a person achieve nowadays?

We have a dozen child prodigies working at NASA, and the last I read about them, they were thinking about how to send better robots to Mars. Why is that a priority for the USA? Why not a cure for cancer, or dealing with heart disease?

// On the other hand, every single person with a very high IQ I have met has been an engineer of some kind, even if only in upgrading the electrical system in their house or programming a script on their PC or mobile.

Given enough raw processing power and enough ability to see patterns in things, you naturally start seeing better, more effective ways to do things.
 
So, the push is to create intelligence as we learn how to create intelligence. We don't need to wait to learn how the brain works first. Learning how the brain works, though, certainly helps our ability to figure out how to create intelligence.

But I am strongly of the opinion that all attempts to create intelligence will fail until we do learn how the brain (i.e. thinking and consciousness) really works. As evidence I cite the fact that successful heavier-than-air flight did not occur until we understood the physics of bird flight, and were able to design wings and controls that worked according to the same principles.
 
We have computers driving cars, winning at chess, winning at Jeopardy ... these were all considered 'intelligence tests' back in the day. And, we now have AIs beating these intelligence tests.

Well, the cracking usually revolves around creativity of some sort. AI has none, because its ability to be creative is preprogrammed.

What is 'creativity', though? It's the ability to imagine all kinds of potential solutions and test them to see if they work. An AI can have that in spades. That's mostly a function of the database it's working from -- well, not only that: the database plus the knowledge to manipulate the things in it.
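
In miniature, that's generate-and-test. A toy sketch, with a made-up parts list standing in for the database and a made-up constraint standing in for the problem:

```python
from itertools import permutations

# Generate-and-test 'creativity': enumerate candidate combinations of
# known parts and keep the ones that satisfy the constraint. The parts
# and the target are invented purely for illustration.
parts = {"rod": 5, "hinge": 2, "plate": 7, "spring": 3}
TARGET_LENGTH = 10

def works(combo):
    return sum(parts[p] for p in combo) == TARGET_LENGTH

solutions = [combo
             for r in range(1, len(parts) + 1)
             for combo in permutations(parts, r)
             if works(combo)]
print(solutions)  # includes ('plate', 'spring') and ('rod', 'hinge', 'spring')
```

A bigger database and smarter enumeration is, arguably, all the 'imagination' being described here.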
 