Hawking et al: Transcending Complacency on Superintelligent Machines

Everyone is so bothered with extinction. AI will be our children and our children will play with the Cosmos in ways we never could.
 
I'm saying we will develop the ability to upload our consciousness into a digital world long before we develop a truly sapient AI.

I tried to argue against that: that's where "trying to duplicate the trillions of synapses in the brain" comes in. You can't just throw a lot of computing power together and declare the result "you". Your brain and my brain are comparably complex, but we're different people. Your personality is stored in large, extended portions of your brain, and depends on very fine details of neural structure. The brain learns by re-working its structure. We don't even understand some of the principles by which the brain leverages structure into thought and feeling. I think we're an insanely long way from being able to "read out" the relevant data from your brain structure. So much so, that it's more probable that powerful AI will be reached by alternate routes first.

When we designed planes, we didn't make them flap their wings. We found an alternate approach that worked better. When we designed submarines, we didn't make them swish their tails. Evolution finds local optima after starting at one particular point and climbing the nearby hills. But intelligent designers can survey a wide variety of starting points, and choose the few that show the most promise of developing into a system that does the job.

Existing computers have a nice clean separation between hardware, operating system, and data. The brain embodies its "operating system" and "data" together indiscriminately, in its hardware. Computer data flies around at a good fraction of the speed of light. Information travels much more slowly in the brain. TLDR, there are enormous differences, not all of which favor the brain. What if highly flexible, goal-optimizing (i.e. "intelligent") behavior becomes possible with something clearly derived from today's computer architectures? It's likely, in that case, that the meat part of any cyborg will become the bottleneck: the slowest thinking in the total package.

If we create it then we can also build in behavioral blocks to prevent it from doing anything we don't want it to do.

We don't understand humans well enough to do that to humans, and I doubt we'll understand AI much better.

You can also limit an AI's ability to "evolve" by deliberately putting it in hardware that only has the capacity to handle the functions it was intended to perform.

If evolution is outlawed, only outlaws will evolve. (Rogue nations and corporations.)

It may "think" like you, act like you (minus those apparently "inferior" animal urges and needs), it may even have your memories, but it's no more "you" than a twin, and a society that embraces singularity seems to be killing itself off so that computer programs can simulate happiness. I find that pointless and horrifying.

I'm about 75% with you there. I dissent by 25% because I think it's possible in principle to get consciousness, emotion, personality traits, etc. just by having the right information moving in the right ways. But it may not be possible in practice. And it almost surely isn't going to happen, at least not fast enough to matter (for some of the reasons I gave vs. Commodore, above).

But is sentience necessary for AI? I don't think so. I think you can have sentience without the "intelligence" part, and you can have AI without sentience.

Agreed. The important part to note is that an insentient AI can still take your job.
 
Why is AI something else? Why can't "we" just become "it"? If that involves eliminating the meat portion of ourselves entirely for technical reasons, and the unique "humanness" that derives from that (whatever that may be), that may sound really bad to us now but in 100 years? 500 years? Who knows where we will be as a society morally and ethically.
 
Stephen Hawking, Max Tegmark, Stuart Russell, and Frank Wilczek have taken the opportunity of Hollywood's flop Transcendence to point out that, hey, AI risk is a real issue. Link.



Well said. They call for amping up research and planning to match the risks and rewards at stake. They point out that self-modifying AI might make a relatively sudden advance from below-human intelligence to above-. It's about time a bunch of smart people noticed.

This is a subject of interest for me. Briefly, some of the views I regard as problematic in their claim above:

Currently there is no "AI intelligence". It is not as if we have an AI with the intelligence of a cow, or even an ant, and there is a prospect of it reaching the intelligence of a human. The AI has zero intelligence as of now, given that it is not aware that it is doing anything (including, in the most 'basic' sense, that it is there at all). If you pick up a stable object and throw it to the ground, it will have an effect on the ground, but obviously not because it willed to have one. It may break the ground in part, or it may itself break to some degree. It may be structured in any number of ways, but it still did not will anything; its properties may or may not be heavily altered by laws it had nothing to do with, and it could not sense their action on it, nor any relation between its current state, those laws, or any formation it consisted of.

Humans have never actually manufactured (as a creator) an intelligence. Even if in the near future computers are linked to parts of DNA, this will still not be the creation of an AI, for the same reason that if you squash an ant's legs and glue some plastic legs to it, the plastic won't be interacting with the ant. At best it would be mechanically (passively and automatically) reacting to some phenomena not sensed but programmed into it to force a reaction; e.g. the legs may move, but it will still just be a piece of plastic with some human coding and circuits to run it. Even the maimed ant would be a genius next to that prosthetic.

Anyway, in my view it is pretty much impossible that an actual 'created AI' will happen here. It is, on the other hand, very likely that some DNA-computer hybrids will be experimented upon and produced, but trying to effectively turn that into an actual intelligence is like an elementary school pupil seeing a computer for the first time and, instead of using code, typing some sentences in English and asking it to reply.
 
Okay, but again I think the development of AI is a matter of if and not a matter of when. I really think we will merge with machines before we create a synthetic intelligence capable of what you describe. And once we merge with machines there really is no need to develop AI at that point.
Sure there would be: humans merged with machines would still be human, and wouldn't necessarily have the extreme computational power that makes AIs so appealing, since human computational power would still be limited by the human brain.
 
I think you're wrong to say the most basic sense is "that you're there". I think that's the essence of self-awareness, of consciousness, and it's extremely rare in the animal kingdom. You mentioned an ant - ants have a sort of intelligence but almost certainly no self-awareness.
 

Which is why I made no mention of 'self-awareness'. I did not mean that an ant is likely to be sensing that it itself exists. But it seems fairly likely that the ant senses. What does it sense? Who knows. Pheromones may be sensed in some strange way by that creature, but given that experiments show it does react to them (and even in what are viewed as set ways), it has to be assumed that it has a sense that something is actually going on.

That, at the most basic level, means that it actually senses. It does not matter if it makes any distinction. What matters is that it senses. A machine does not sense, and while there are infinite ways to get to any number if you start from virtually any other number as the epicenter of the alteration, it is quite probable that if you start with absolute zero as the epicenter you cannot get anywhere else.
 
Well, you're just dead wrong that machines can't sense. I'm typing this on a pocket computer that can see and hear, knows if it's moving through space, knows its orientation to Earth's magnetic field, knows if it's being held to my face, and knows if it's being dropped.

At work I sometimes make little machines that have to sense changes in their environment.

Machines can sense, analyze, and then take action based on all that. That's been true for hundreds of years.
 

At first I thought you were making an uncharacteristic joke (or Socratic irony) in expectation of a new conclusion, but:

- Surely the way you used the term "sense" in your post above is not the sense that bears on the whole 'intelligence' issue. I mean, if you use "sense" to mean "is programmed to passively react to something / show some info when it reacts", then you are (in my view) far better off using, instead of the loaded term "sense", a term like "reacts". I use this term because (obviously) it has been used since aeons ago for other sorts of changes which happen without any intelligence being present in the objects which change through them, as in a chemical reaction. If you bring a flame near a piece of paper it will start to burn, but surely it is a stretch to say that the paper was programmed to be burned and so it was. Likewise, if I click on my new 40K Byzantine Army in EUIV and order it to attack some nasty forces in regions I wish to kindly liberate from their tyrannical rule ( ;) ), it won't move because it got a wireless telegraph message from me and decided to follow my order. It will move because it is nothing from its own point of view. The sprite of the 40K Army, the game map, my glorious Empire, the enemy nations: they do not signify anything at all to the computer, because it does not examine any of that at all.

If I were in 1122 and ordered the actual Varangian Guard to storm the warrior-wagons of the Pechenegs, they would comply with me, their Komnenian Emperor. When I am in a game and order the analogous attack, I am the only one there thinking of an attack, of Pechenegs, of armies, of Dane axes. As for those Varangians on the screen: even if they were somehow outside of it, in my room, and I sent them to chop someone down, they would still have done what they would have done anyway, regardless of whether "I" was myself or a collection of parameters which, when linked to their code, signified the affirmative response to my command.
 
Currently there is no "AI intelligence". It is not as if we have an AI with the intelligence of a cow, or even an ant, and there is a prospect of it reaching the intelligence of a human. The AI has zero intelligence as of now, given that it is not aware that it is doing anything (including, in the most 'basic' sense, that it is there at all).


This is what I was reacting to. Machines can definitely sense that they are doing something. That's the whole point of limiters and feedback loops.




At first I thought you were making an uncharacteristic joke (or Socratic irony) in expectation of a new conclusion, but:

- Surely the way you used the term "sense" in your post above is not the sense that bears on the whole 'intelligence' issue. I mean, if you use "sense" to mean "is programmed to passively react to something / show some info when it reacts", then you are (in my view) far better off using, instead of the loaded term "sense", a term like "reacts". I use this term because (obviously) it has been used since aeons ago for other sorts of changes which happen without any intelligence being present in the objects which change through them, as in a chemical reaction.
I mean sense as in a microprocessor gathering input from sensors. "React" has a passive sense to it (like paper reacting to flame), but if that flame were near a photocell, a voltage change would be induced, and the microprocessor could be programmed to change its state (either internal electronic state or external physical state) based on the voltage value until a target state is reached.

I'm not calling that intelligence in the strong AI sense, but it's clearly an example of a machine doing something, and being aware that it's doing it.
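
Since the discussion keeps coming back to feedback loops, limiters, and a target state, here is a minimal sketch in Python of the kind of sense-compare-act loop being described. Everything in it is invented for illustration (the names read_sensor, TARGET_TEMP and LIMIT_TEMP, and the toy "physics", are assumptions, not anyone's actual firmware); the sensor and actuator are simulated so the snippet runs on its own, whereas on real hardware the reading would come from an ADC attached to a photocell or thermistor and the output would drive a pin.

```python
import random

# Hypothetical set-points, for illustration only.
TARGET_TEMP = 40.0   # the target state the loop tries to reach
LIMIT_TEMP = 60.0    # the "limiter": never allow the system past this point

temperature = 25.0   # simulated environment state
heater_on = False

def read_sensor():
    """Simulated sensor reading with a little noise (stands in for the photocell)."""
    return temperature + random.uniform(-0.5, 0.5)

for step in range(50):
    reading = read_sensor()

    # The controller's next action depends on the measured consequences of
    # its previous actions - the feedback the posts above are arguing about.
    if reading >= LIMIT_TEMP:
        heater_on = False        # hard safety limit
    elif reading < TARGET_TEMP:
        heater_on = True         # below target: act
    else:
        heater_on = False        # at or above target: stop acting

    # Toy physics: warm while the heater is on, cool a little while it is off.
    temperature += 1.5 if heater_on else -0.8

    print(f"step {step:2d}: reading={reading:5.1f} heater={'ON' if heater_on else 'off'}")
```

Whether that loop deserves the word "sense" rather than "react" is, of course, exactly what is being disputed here.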
 
This is what I was reacting to. Machines can definitely sense that they are doing something. That's the whole point of limiters and feedback loops.

I mean sense as in a microprocessor gathering input from sensors. "React" has a passive sense to it (like paper reacting to flame), but if that flame were near a photocell, a voltage change would be induced, and the microprocessor could be programmed to change its state (either internal electronic state or external physical state) based on the voltage value until a target state is reached.

I'm not calling that intelligence in the strong AI sense, but it's clearly an example of a machine doing something, and being aware that it's doing it.

But what is, in essence, so distinct between the two changes (paper burning, and some created machine being triggered to change due to the same external object, a flame)?

Surely the paper will burn, and the machine may be triggered by the flame's parameters to alter in other ways. But why are you of the view that the latter is less passive than the former?

(One could always add some oil to the piece of paper, triggering a somewhat different reaction, or some other substance with an even more different result. Why would programming/creating a machine so that the triggered effect is different each time make it any less passive than what happens to the piece of paper?)
 

(There was a part added before your reply, the one in parentheses in my previous post, which possibly makes it more evident that the actual equilibrium or other end state of the effect does not have to make it be deemed different from the case of a non-programmed object.)
 
I don't really understand what you just wrote, so let me put it another way:

"what is ... so distinct between the two changes (paper burning, and some ... machine being triggered to change due to the same external object, a flame)?"

In the case of paper burning in the presence of a flame it's a simple thermodynamic reaction that rides the downward entropy gradient. Entropy is lower after the interaction.

In the case of a microprocessor performing a function upon being triggered by the flame the entropy may or may not decrease. Actually, I *think* it decidedly doesn't increase, but I'm not at all a theoretician, as is obvious! :lol:


Or to look at it another way, presenting a flame to a piece of paper (whether or not it's oiled) will definitely result in the paper burning. But presenting the flame to a sensor attached to a microprocessor will have various results depending on the software the computer is running. From the outside it's unpredictable, even though the outcome is entirely predictable to the programmer.

And with AI, it's the program itself that is adjusting the code based on past experience. No human input required. That's the key difference.
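
As a concrete (and deliberately toy) picture of a program adjusting its own behaviour from past experience, here is a short Python sketch of a plain least-mean-squares update. It is not any specific AI system, and the names weight, learning_rate and true_relation are invented for the example; the point is only that the rule the program follows next is rewritten by the errors of its own past predictions, with no human editing anything between steps.

```python
import random

weight = 0.0         # the "rule" the program keeps rewriting for itself
learning_rate = 0.05
true_relation = 3.0  # hidden pattern in the environment, never shown to the program directly

for _ in range(200):
    x = random.uniform(-1, 1)      # a new observation arrives
    prediction = weight * x        # behave according to the current rule
    outcome = true_relation * x    # what the environment actually does
    error = prediction - outcome

    # Past experience feeding back into the rule itself (least-mean-squares update).
    weight -= learning_rate * error * x

print(f"learned weight: {weight:.2f} (hidden value was {true_relation})")
```

Whether that kind of self-adjustment ever amounts to intelligence, rather than a more elaborate reaction, is the question the rest of this exchange circles around.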
 

Ok, but why do you regard it as something essentially different from a reaction, and closer to 'intelligence'? I mean, if you have a cylindrical tube, and a sphere which just fits into it, and you push the sphere into the tube, the sphere will move (or not) depending on physics. It may fall to the end of the tube, may stop at some point if met with resistance or a narrowing of the space inside the tube, and so on. What you are suggesting seems to me to amount to:
'If we have a tube which alters its shape based on some info unknown to anyone who has not coded it or calculated the code (or, at best, who cannot calculate the change right now, even though the change is actually a result of the previous code anyway, and not the result of an intelligent alteration on the part of the coded object), then the tube stops being a tube at some point, and is now intelligent.'

Which seems very wrong.
 
And, btw, I am of the view that humans can indeed make such an object, i.e. one which changes in a manner not calculable by humans in any circumstance. But this just means it would still be an inanimate object, changing in a manner forced on it by some program. It does not mean at all that at some point it will start being worthy of being termed 'intelligent'. It may be a very interesting toy or prop for scientific experiments (I am sure it would), but it would have zero intelligence.
 
This is what I was reacting to. Machines can definitely sense that they are doing something. That's the whole point of limiters and feedback loops.

I mean sense as in a microprocessor gathering input from sensors. "React" has a passive sense to it (like paper reacting to flame), but if that flame were near a photocell, a voltage change would be induced, and the microprocessor could be programmed to change its state (either internal electronic state or external physical state) based on the voltage value until a target state is reached.

I'm not calling that intelligence in the strong AI sense, but it's clearly an example of a machine doing something, and being aware that it's doing it.

This invites the answer that the machine was programmed to "act" that way. I am not sure that humans can be self-aware until they are masters of their own thoughts. For perhaps they are just programmed machines that cannot change their programming, but only act in a pre-determined way.
 
Sure there would be: humans merged with machines would still be human, and wouldn't necessarily have the extreme computational power that makes AIs so appealing, since human computational power would still be limited by the human brain.

Well, my idea of humans merging with machines is us uploading our consciousness into a digital world. Once that happens we are no longer limited by the processing power of our biological brain. If that's how it happens, then we have no need to develop AI.
 