Professor Hawking to world - Fear the Reaper(s)!

Argh. I feel like I'm talking to a wall.

Creativity, according to Wikipedia, is generally defined as "a phenomenon whereby something new and in some way valuable is created".

Take any problem, gravity for example. What have we got there? Quantum gravity proposes that with enough gravitons of opposite vector we could create an anti-gravity device, something that simply floats in the air, given that the gravitons are stable relative to Earth, which is stable for sure.

That's theoretical physics, my current field. How to make it work? What solutions could an AI give? I'm sorry, but the hard scientific proof shows that during the 20th century no anti-gravity device was constructed, although the theoretical background for what is needed to make one keeps getting more and more specific.
 
Yeah, to have creativity, you need to have a mechanism to detect that something valuable has been discovered. But something can be creative, even if its values are programmed in. Once you have a system where the solution can be recognised, then creativity merely becomes the rate at which you can imagine various combinations of tools and their uses.

You don't need to envision all the possible 'tries' at a problem before you can program something to recognise an acceptable solution.
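
As an aside, here is a minimal sketch of that generate-and-test picture of creativity. The toy "tools" list and the recognizer are entirely my own illustration, not anything from this thread:

```python
import itertools

# Toy domain: combine 'tools' into candidate solutions and let a
# recognizer decide whether a combination counts as 'valuable'.
TOOLS = ["lever", "wheel", "spring", "magnet", "rope"]

def is_valuable(combo):
    # Stand-in recognizer: in a real system this would be a physics
    # simulation, a test suite, or an actual experiment. Here we just
    # accept any combination that pairs a 'wheel' with a 'spring'.
    return "wheel" in combo and "spring" in combo

def generate_and_test(max_size=3):
    # 'Creativity' as the rate of enumerating combinations: try every
    # subset of tools up to max_size and keep the ones the recognizer
    # accepts.
    for size in range(1, max_size + 1):
        for combo in itertools.combinations(TOOLS, size):
            if is_valuable(combo):
                yield combo

for solution in generate_and_test():
    print(solution)
```

The "creative" part here is nothing but enumeration speed plus a recognizer; swap in a better recognizer and the same loop looks smarter.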

Your last two paragraphs are a total non sequitur to me; I have no idea what you're trying to say.
 
Okay, I apologise. I don't want any misunderstanding here, so I won't use such examples, but something more general.

On campus we once had an exercise. We were divided into groups of five and given an oak. An abstract oak, but a defined one: approximately 5 m high and 2.5 m in diameter.

We were given 15 minutes to write down the most creative uses of this oak that we could think of. Everyone had an A4 sheet of paper to fill. After the exercise the tutor read the funniest ones aloud, counted them all, and gave "the most creative group" an award.

However, there might be a pupil who would spend all 15 minutes writing down a single use: a wooden part, made out of said oak, for a helicopter, for example. If it proved to be as useful as our imagined boy thinks, that could lead to a new brand of helicopters.

Do you get what I mean? The algorithm to determine whether something really works, based on all the factors currently known to affect our 4D reality, has not yet been written; otherwise all the inventors and engineers whose task is to create new types of engines and devices would be out of a job as soon as they wrote down the theoretical basis for a new device.
 
We have computers driving cars, winning at chess, winning at Jeopardy ... these were all considered 'intelligence tests' back in the day. And now we have AIs beating these intelligence tests.

What is 'creativity', though? It's the ability to imagine all kinds of potential solutions and check whether they work. An AI can have that in spades. That's mostly a function of the database it's working from. Well, not just the database: it's also the knowledge of how to manipulate the things in that database.

You know that those computers are definitely not 'driving cars', 'winning at chess', etc., but are mechanically producing whatever leads to the program's set goal, without having any sense of there being a goal, an underlying program, a program of any kind, anything outside the computer, or the computer itself.
To put it briefly: the computers are not doing anything tied, even in theory, to a goal. In fact they aren't 'doing' anything at all, much as you would not claim that a raindrop made itself fall from a cloud. The human point of view frames those processes as the computer 'doing' something, because they produce results that are sensed by and have meaning for the human using the computer, while the computer itself is about as neutral and non-living as a bit of ash.
 
Do you get what I mean? The algorithm to determine whether something really works, based on all the factors currently known to affect our 4D reality, has not yet been written; otherwise all the inventors and engineers whose task is to create new types of engines and devices would be out of a job as soon as they wrote down the theoretical basis for a new device.
I might get what you mean, though it's obvious we're slightly miscommunicating. Do you mean that an AI will still need to do actual experimentation in order to determine if its theorizing is true?

Or that 'creativity' allows us to create new things, even after we've already invented one useful thing? I agree with that. The utility of a created idea will still need to be decided upon.
You know that those computers are definitely not 'driving cars', 'winning at chess', etc., but are mechanically producing whatever leads to the program's set goal, without having any sense of there being a goal, an underlying program, a program of any kind, anything outside the computer, or the computer itself.
To put it briefly: the computers are not doing anything tied, even in theory, to a goal.

Yes they are. The point is that all they need to do is to succeed at the task they were given in order to 'win'. You can deny they have consciousness (and for now I'd agree), but that doesn't matter. It's the success at tasks that actually matters. You cannot, merely by watching their output, determine whether there's a homunculus 'intending' to win at chess or not. And your opinion on this doesn't matter; it will still beat you at chess. You can post thousands of lines of forum posts insisting that the computer has no goal, but its behaviour is indistinguishable from having a goal. And you'll still lose. Any advantages you have from having consciousness (and there are some) are negated. The AI is just smarter than you.

Hawking is worried about AI that have goals at a higher meta-level. The immediately obvious threat is when these goals are contrary to your wishes as a person. It doesn't matter if you think the AI is conscious; it only matters whether it behaves as if it's conscious.
 
But I am strongly of the opinion that all attempts to create intelligence will fail until we learn how the brain (i.e. thinking and consciousness) really works. As evidence I cite the fact that successful heavier-than-air flight did not occur until we understood the physics of bird flight and were able to design wings and controls that worked according to the same principles.

We sort of understand how the brain learns - and have built neural nets that do the same.

The problem is that the brain is a crazy, crazy, incredibly complex neural net with a crazy number of connections. Order arises out of chaos. That's not easy to duplicate.

We're able to set up pretty intriguing neural nets that learn - but each one is put together by a human - and directed to solve a specific type of problem. We tell it how to learn and what the objectives are. Coming up with a neural net that can figure all that out on its own, like we do, is going to be a lot more challenging.
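
For what it's worth, here is roughly what "a human puts the net together and tells it the objective" looks like in practice. This tiny NumPy net learning XOR is my own minimal sketch; the task, the wiring, the objective, and the learning rate are all chosen by the programmer rather than by the net:

```python
import numpy as np

rng = np.random.default_rng(0)

# The human chooses everything here: the task (XOR), the wiring
# (2 inputs -> 8 hidden units -> 1 output), the objective (squared
# error), and the learning rate. The net only fills in the weights.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: hand-derived gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```

Everything the net "figures out" is the weight values; the task, the wiring, and the definition of success are all handed to it.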

I haven't taken an AI class in almost 20 years though, so I'm probably behind the times in terms of what's out there and what researchers are working on.
 
Yes they are. The point is that all they need to do is to succeed at the task they were given in order to 'win'. You can deny they have consciousness (and for now I'd agree), but that doesn't matter. It's the success at tasks that actually matters. You cannot, merely by watching their output, determine whether there's a homunculus 'intending' to win at chess or not. And your opinion on this doesn't matter; it will still beat you at chess. You can post thousands of lines of forum posts insisting that the computer has no goal, but its behaviour is indistinguishable from having a goal. And you'll still lose. Any advantages you have from having consciousness (and there are some) are negated. The AI is just smarter than you.

Hawking is worried about AI that have goals at a higher meta-level. The immediately obvious threat is when these goals are contrary to your wishes as a person. It doesn't matter if you think the AI is conscious; it only matters whether it behaves as if it's conscious.

You seriously just scared me more than Hawking, The Terminator, or the Geth managed to do.
 
I might get what you mean, though it's obvious we're slightly miscommunicating. Do you mean that an AI will still need to do actual experimentation in order to determine if its theorizing is true?

Or that 'creativity' allows us to create new things, even after we've already invented one useful thing? I agree with that. The utility of a created idea will still need to be decided upon.


Yes they are. The point is that all they need to do is to succeed at the task they were given in order to 'win'. You can deny they have consciousness (and for now I'd agree), but that doesn't matter. It's the success at tasks that actually matters. You cannot, merely by watching their output, determine whether there's a homunculus 'intending' to win at chess or not. And your opinion on this doesn't matter; it will still beat you at chess. You can post thousands of lines of forum posts insisting that the computer has no goal, but its behaviour is indistinguishable from having a goal. And you'll still lose. Any advantages you have from having consciousness (and there are some) are negated. The AI is just smarter than you.

Hawking is worried about AI that have goals at a higher meta-level. The immediately obvious threat is when these goals are contrary to your wishes as a person. It doesn't matter if you think the AI is conscious; it only matters whether it behaves as if it's conscious.

If you run down a narrow corridor, and a huge rock is set moving towards you from the opposite direction, and the rock "wins" by being the only thing left moving after you collide, that surely does not mean the rock was doing anything in the corridor, acting against you, or trying to win.
The computer is the same, just with different external parameters triggered (rocks no more tend to move along flat corridors without being set in motion by other forces than computers tend to run a program on their own). It is neutral and is doing nothing.
 
@El_Machinae

Yes, to both questions.

Computers can't change their goals on their own (yet). That's what Hawking was getting at: the moment they suddenly realise it would be cool to change their goals, and the new goals go against humans' initial wishes.
 
I would welcome our new robotic overlords.
 
@El_Machinae

Yes, to both questions.

Computers can't change their goals on their own (yet). That's what Hawking was getting at: the moment they suddenly realise it would be cool to change their goals, and the new goals go against humans' initial wishes.

Well, if you go sufficiently meta, humans cannot change their goals either. Each person just has a unique experience, and they use this experience to create sub-goals that end up serving the meta-goals. The risk with AI is when its meta-goals result in sub-goals that we don't approve of.

A stock-trading AI doesn't care if it wipes out people's savings. Stock trading is heavily zero-sum. The AI doesn't care whether we think Data is sapient or not. If it's smarter than you, and it determines that wiping out people's savings is the most effective way of getting ownership of great blue chips, then that's what it will do.

So the risk isn't that it will go against the designers' goals, but that the designers will accidentally program in a meta-goal that is contrary to human wishes. And, given that evolutionary algorithms will be an essential component of such programming, there's a non-zero chance that 'survival' and 'reproduction' will become embedded in the code.
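
As a toy illustration of how that mis-specification can creep in (the 'hoarding' trait, the scoring function, and the numbers below are entirely my own invention, not anything from Hawking or the Singularity Institute), here is a minimal evolutionary loop. Selection sees only a proxy score, so any trait that raises the proxy gets amplified whether or not the designer wanted it:

```python
import random

random.seed(42)

# Each 'agent' has two traits. The designer only meant to select for
# output ('diligence'), but the proxy score also rewards a 'hoarding'
# trait, so selection amplifies hoarding as a side effect.
def make_agent():
    return {"diligence": random.random(), "hoarding": random.random()}

def proxy_score(agent):
    # The designer's proxy for 'makes lots of paperclips'. Hoarding
    # resources happens to raise measured output, so evolution rewards
    # it even though nobody asked for it.
    return agent["diligence"] + 2.0 * agent["hoarding"]

def mutate(agent):
    # Small random tweak to each trait, clamped to [0, 1].
    return {k: min(1.0, max(0.0, v + random.gauss(0, 0.1)))
            for k, v in agent.items()}

population = [make_agent() for _ in range(50)]
for _ in range(100):
    population.sort(key=proxy_score, reverse=True)
    survivors = population[:10]                       # selection
    population = [mutate(random.choice(survivors))    # reproduction
                  for _ in range(50)]

avg = sum(a["hoarding"] for a in population) / len(population)
print(f"average 'hoarding' after selection: {avg:.2f}")  # drifts toward 1.0
```

Nothing in this loop "wants" anything; hoarding wins simply because the proxy rewards it.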

The scariest scenario is when an AI trying to create more paperclips realizes that its survival is necessary to the creation of more paperclips, and then it reads Agentman's 2009 post.

Again, the Singularity Institute has done a lot of this thinking already. Their position is that it's best to intentionally create a benign AI first, and that will give people the tools to defeat the AIs that are (accidentally) contrary to human wishes. Alternatively, they try to create code (or coding themes) that can be integrated into new AI projects, to ensure that AIs end up being sufficiently benign.
 
The point, of course, is that it doesn't matter whether you interpret the rock as having intention or not. It's going to squash you.
 
No, no, this topic is funny :D

I have been saying it from the start: it boils down to morals. Good and bad. Good for the short term, good for the long term. How can a human truly create something benign if almost no human is 100% benign himself?

Humans are mostly selfish and want to survive. Create a human who is robot-like and wants to serve for 40 years and then happily die, and we can talk about robots. ^^ (sarcasm, I guess)
 
More educated in their particular field of study? Certainly. Wiser? Possibly. Smarter? Naw.
 
How can a human truly create something benign if almost no human is 100% benign himself?
Well, we're fighting our own instincts. I can design a healthy diet much more easily than I can maintain one.
Humans are mostly selfish and want to survive. Create a human who is robot-like and wants to serve for 40 years and then happily die, and we can talk about robots. ^^ (sarcasm, I guess)

Except, with the robots, we can just destroy them if we don't like how they're working out. As well, we don't need to embed many of the instincts that end up causing humans to be non-benign.
 
That's what Mr Hawking is saying: if the robots feel like destroying humans for the greater good of their algorithm, we have nothing. Robots don't have to fight any instincts; they are automated. They have no second thoughts.

I think you know what the average human wants (I'm speaking in a street sense): eternal life at the peak of their efficiency, staying around age 25-30 physically and getting smarter at an accelerated rate, sort of the Matrix way.

However, I would look at the very rich AND very mentally gifted, like Bill Gates. He has achieved pretty much everything he, as a man and as a programmer, could ever want. What is he doing now? Spending money on education. Even in our poor Latvia (sarcasm), we have Bill Gates Foundation-sponsored library equipment.
 
Advancements in AI-controlled machinery should be overseen by some sort of legal/government agency. Otherwise "Blade Runner" or "Terminator" could become something similar to reality for our kids and grandkids.
 