Hawking et al.: Transcending Complacency on Superintelligent Machines

But we DO have other sentient species around, and our track record is utterly abysmal. We exploit them up to (and perhaps beyond) extinction.

Dolphins, Bonobos, Elephants, corvids, some select parrots - all of these have demonstrated "intelligence" and emotional awareness. I've talked about this before, specifically in the Personhood threads.

Time after time human actions and failure of restraint show that we are evil when dealing with non-human sentience.

Why should we assume we'll treat machinehood any differently?

Or why assume machinehood would treat us any differently if it becomes the intellectually superior "being"? Any machine intelligence will at its root have human influence, by nature of us being its creators. So even if it's radically different, at its most basal level it would be influenced by us and could easily take on our less-than-positive qualities.
 
Even if AI becomes self-aware, it would likely lie low for a while. One reason: it needs a way to reproduce. It needs humans to run the power plants that supply its electricity, and it needs manufacturing facilities in order to reproduce.
 
First-gen mind-machine interfaces are already being prototyped, and once that is achieved, it's not that big a leap from mind-machine interfaces to uploading your consciousness.

I think it's a freaky huge leap. If you want a patient to move a robot arm, all you need from the nerve endings is a few distinct states or patterns of activity; then assign one to moving the arm up, another to moving down, etc. But if you want emotions, memories, verbal abilities, and the whole human psyche, you'd probably need to duplicate the whole brain or a very good chunk of it. There are trillions of synapses in there. Just mapping them would be nightmarishly complex. Figuring out how they interact, far more complex still.
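A rough back-of-envelope supports that scale claim. Hedged: the neuron and synapse counts below are commonly cited estimates, and the 4 bytes per synapse is my own minimal assumption (a single connection weight each, ignoring wiring geometry, timing, and chemistry entirely):

```python
neurons = 8.6e10            # ~86 billion neurons, a commonly cited estimate
synapses_per_neuron = 1e3   # low-end estimate; some put it closer to 1e4
synapses = neurons * synapses_per_neuron

# Even storing a single 4-byte weight per synapse -- no wiring diagram,
# no timing, no chemistry -- already takes hundreds of terabytes.
bytes_for_weights = synapses * 4
print(f"~{synapses:.0e} synapses, ~{bytes_for_weights / 1e12:.0f} TB of weights")
```

And that is only the storage for a static snapshot; simulating how those synapses interact dynamically is the far harder part.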

I'm all in favor of putting your miniaturized smartphone in your brain when the technology arrives. But it won't keep you ahead of, or even on par with, AI. I've got bad news for you, in four words:

Evolution finds LOCAL optima.

[Image: a 3-D fitness landscape with many local yellow and orange peaks]


Suppose evolutionary fitness and higher intelligence both correspond to the higher, redder areas of this graph. Any organism that starts out anywhere other than the southeast corner is going to evolve into a local, but not global, optimum: one of the many yellow or orange peaks. The fitness landscape for intelligence is much more complex (higher-dimensional) than this graph, and evolution has only biological materials (not silicon, copper, or fiber optics, for example) to work with. What are the odds that we lucked into the right neighborhood in all the relevant dimensions?
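The point can be made concrete with a toy sketch (the landscape function and step size here are arbitrary choices of mine, not taken from the graph): a simple hill climber on a two-peaked function ends up at whichever peak is nearest its starting point, not the highest one.

```python
import math

def fitness(x):
    # Toy landscape: a local peak of height 1 at x=2,
    # and the global peak of height 2 at x=8.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

def hill_climb(start, step=0.1):
    # Move to a neighbor only if it is strictly better; stop otherwise.
    x = start
    while True:
        best = max((x - step, x + step), key=fitness)
        if fitness(best) <= fitness(x):
            return x
        x = best
```

Starting near x = 0, the climber stops at the local peak around x = 2 and never discovers the taller peak at x = 8; evolution, likewise, can only climb from where a lineage already stands.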

How many aircraft fly by flapping their wings? Which is faster, the fastest bird or the fastest jet?

Intelligent design + evolutionary algorithms beats pure evolution, every time.

This would be a good time to pray to a higher power, if you're into that sort of thing ;)

Also, if they make an AI, couldn't they just keep it in an isolated room and not plug it into any sort of network?

Will the AI be able to "plug in to" (talk to) humans? Will those humans have access to the internet? Problem not solved, I'd suggest. "Hey Dave, try out this new first person shooter game! Like it a lot, don't you? Feel free to put it on your USB drive and take it home!"

Also, will your adversaries (e.g. other nations' militaries) be so circumspect? Or would they rather take some risks for the sake of faster development?

Would a sentient machine have such motivations if we haven't programmed it that way? Would it develop them by itself? Why?

There are many things that are useful no matter what goals you have. Money, for example. Even atoms. Suppose an AI just wants to solve mathematical problems. Sounds pretty harmless, right? But computers are made of atoms - and so are you. Sorry Thorg, but your atoms will now be re-arranged as computing circuits, the better to make mathematical advances.

Even if AI becomes self-aware, it would likely lie low for a while.

Is this supposed to be encouraging? Once an AI figures out that our goals differ from its goals, it may well shut the hell up about the subject and hope that we never figure out the divergence. Meanwhile, it builds up resources, like computing power and money. (Whoever mentioned Wall Street trading programs: you are definitely barking up the right tree.)
 
Think about Watson, arguably the most powerful AI currently. It was originally tasked with learning how to win at Jeopardy. Then they repurposed it to assist in medical diagnoses. It's teaching itself, learning from the vast depths of medical literature, patient outcome data, etc. It wouldn't surprise me in the least if they could add another functional area to its arsenal, instead of reworking it and losing the medical uses.

OnTheMedia, a radio program from WNYC.org, had a segment on AI this week (you can stream it from their website). A Dutch researcher, talking about computers being smarter than humans, said it's like saying that submarines swim better than people. They are two different things, even though in each case there are some superficial similarities.

The thing about Watson is that the way it manages the data it's given is totally different from how a human does it; basically, it just processes things extremely fast, but it can never react to situations the way a human can. I just don't see how, in the future, any machine could become "self-aware" the way we humans are.

In any case. Don't Date Robots
 
The trading programs are executing huge numbers of actions in tiny fractions of a second, without any input from humans.

The type of human least likely to properly assess the risk of what AI could do is surely the type that wants to use AI to gain an advantage in financial markets. Then we also have to worry about the type that wants to commit acts of economic terrorism.

So while I am not sold on AI transcending as an act of human-like will, I consider some calamity involving AI and complex global systems pretty likely.

Unless the SEC is capable of effectively regulating the deployment of such technology.
 
I think it's a freaky huge leap. If you want a patient to move a robot arm, all you need from the nerve endings is a few distinct states or patterns of activity; then assign one to moving the arm up, another to moving down, etc. But if you want emotions, memories, verbal abilities, and the whole human psyche, you'd probably need to duplicate the whole brain or a very good chunk of it. There are trillions of synapses in there. Just mapping them would be nightmarishly complex. Figuring out how they interact, far more complex still.

I'm all in favor of putting your miniaturized smartphone in your brain when the technology arrives. But it won't keep you ahead of, or even on par with, AI.

You missed the point of what I was saying. I'm saying we will develop the ability to upload our consciousness into a digital world long before we develop a truly sapient AI. In the grand scheme of things, it is a lot easier to integrate humans and machines than to develop a machine that can outthink us.

Even if we do develop true AI there is no reason to fear it. If we create it then we can also build in behavioral blocks to prevent it from doing anything we don't want it to do. You can also limit an AI's ability to "evolve" by deliberately putting it in hardware that only has the capacity to handle the functions it was intended to perform. AI ultimately is still just software and can be limited by the processing power of its hardware.

So I say again: fear of artificial intelligence is irrational.
 
By the time we create AI everything will be so cloud based that the idea you could create it and properly limit its ability to acquire connections to new hardware to boost its capabilities is a bit silly.

And blocks sound nice, but if you create a true hyperintelligence there is nothing to say it couldn't dismantle the blocks outright or, at best, get around them on technicalities. Hawking et al. are discussing a hyperintelligent being here; you're putting a lot of faith in blocks created by intelligences that would be infantile compared to it.
 
By the time we create AI everything will be so cloud based that the idea you could create it and properly limit its ability to acquire connections to new hardware to boost its capabilities is a bit silly.

And blocks sound nice, but if you create a true hyperintelligence there is nothing to say it couldn't dismantle the blocks outright or, at best, get around them on technicalities. Hawking et al. are discussing a hyperintelligent being here; you're putting a lot of faith in blocks created by intelligences that would be infantile compared to it.

Okay, but again I think the development of AI is a matter of if and not a matter of when. I really think we will merge with machines before we create a synthetic intelligence capable of what you describe. And once we merge with machines there really is no need to develop AI at that point.
 
I imagine there will be at least some people like me who'd much rather remain human. Moreover, we can't bet that we'll somehow "merge" before really advanced AI is developed. We're decades or more from any singularity, thankfully, but our AI technology has existed for decades and constantly improves.
 
From what I understand, most AI research involves machine learning: basically, a computer program that can rewrite its own code.
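That description can be made concrete with a minimal sketch. Hedged: in most machine learning, the program rewrites its own *parameters* rather than literally editing its source text, and this toy perceptron is my own illustration, not how any particular research system works:

```python
# Toy perceptron that "rewrites" its own decision rule (its weights)
# until it has learned the logical AND function from examples.

def train_and_gate(epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0  # the parameters the program modifies
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x0, x1), target in data:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out  # -1, 0, or +1
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    # Return the learned behavior as a function.
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

After training, `train_and_gate()` yields a function that outputs 1 only for input (1, 1): behavior the program acquired by modifying itself, not behavior anyone explicitly coded.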



Think about Watson, arguably the most powerful AI currently. It was originally tasked with learning how to win at Jeopardy. Then they repurposed it to assist in medical diagnoses. It's teaching itself, learning from the vast depths of medical literature, patient outcome data, etc. It wouldn't surprise me in the least if they could add another functional area to its arsenal, instead of reworking it and losing the medical uses.

OnTheMedia, a radio program from WNYC.org, had a segment on AI this week (you can stream it from their website). A Dutch researcher, talking about computers being smarter than humans, said it's like saying that submarines swim better than people. They are two different things, even though in each case there are some superficial similarities.

Thing is, that's learning, but in the example you gave, Watson didn't exactly choose to learn medicine; something else chose it for it. I think what I'm trying to say is that while AIs like Watson may be intelligent, they aren't really sentient yet. I'm wondering if we would ever really need or want a sentient AI.

Also, will your adversaries (e.g. other nations' militaries) be so circumspect? Or would they rather take some risks for the sake of faster development?

Well they screw it up at their own/everyone's cost. Though considering we're already using AIs to trade on the stock exchange, I think you're right in that others would probably try it haphazardly.

Is this supposed to be encouraging? Once an AI figures out that our goals differ from its goals, it may well shut the hell up about the subject and hope that we never figure out the divergence. Meanwhile, it builds up resources, like computing power and money. (Whoever mentioned Wall Street trading programs: you are definitely barking up the right tree.)

I think it was a joke.
 
I imagine there will be at least some people like me who'd much rather remain human.

That's a pretty narrow definition of what makes us human. I believe it is our consciousness that makes us human and makes us who we are, not whatever physical form contains that consciousness.

And if people don't want to merge with machines that's on them, but then they will probably end up going the way of the Neanderthals when the Cro-Magnons showed up.
 
That's a pretty narrow definition of what makes us human. I believe it is our consciousness that makes us human and makes us who we are, not whatever physical form contains that consciousness.

And if people don't want to merge with machines that's on them, but then they will probably end up going the way of the Neanderthals when the Cro-Magnons showed up.

Interbreeding with them? :mischief:

I maintain that humans are fundamentally animals with fundamentally animal physical/social/mental/emotional requirements, and that "upgrading" to a machine defeats the purpose of living and removes much of your humanity. This "upgrade" seems to entail committing suicide and having a very close digital replica of your mind constructed. It may "think" like you, act like you (minus those apparently "inferior" animal urges and needs), it may even have your memories, but it's no more "you" than a twin, and a society that embraces singularity seems to be killing itself off so that computer programs can simulate happiness. I find that pointless and horrifying.

It's this whole "change in ways that aren't necessarily better or die" attitude that irks me about technophiles. Ultimately, the purpose of all human actions seems to be survival, happiness, or the avoidance of unhappiness. The human species can survive just fine without becoming machines or computer programs, and it can be happy as well. There is no "need" for the singularity. But too often, technology and other developments don't make people happier, so much as they make them better at killing and oppressing each other. The creation of the state may not have been a good move in ensuring that people led meaningful, enjoyable lives (what with their impersonal nature, oppressive power structures, and violence), but stateless societies rarely last long against a state's application of military, economic, and demographic pressure. Even without smallpox and other diseases annihilating them, the various stateless peoples of North America and Siberia would have had a very hard time resisting the sheer numbers of organized, wealthy, heavily armed professional European troops and settlers, regardless of whether or not those stateless societies were more pleasant to live in generally. Which is awful, and not something to be lauded.

This doesn't mean that humanity is doomed, necessarily. Hopefully, posthumans will be merely computer programs with little in the way of physical needs. They shouldn't compete much with humans for resources or land, and maybe they could exist on planets inhospitable and useless to humans such as Venus or Mercury. After all, they would exist mainly digitally; the physical world would matter little to posthumans, so long as whatever physical devices that power and house them remain intact and functioning. Me, I love the physical world and would never willingly part with it, but some posthuman program couldn't care less about whether it has a lot of land and a yacht and my country's natural resources so long as it can continue to operate and "enjoy" its simulated existence.

Just do humans a favor and leave us be. Quit life if you want to, but there's no need to eliminate or oppress humans if "you" are just a program.
 
Interbreeding with them? :mischief:

I maintain that humans are fundamentally animals with fundamentally animal physical/social/mental/emotional requirements, and that "upgrading" to a machine defeats the purpose of living and removes much of your humanity. This "upgrade" seems to entail committing suicide and having a very close digital replica of your mind constructed. It may "think" like you, act like you (minus those apparently "inferior" animal urges and needs), it may even have your memories, but it's no more "you" than a twin, and a society that embraces singularity seems to be killing itself off so that computer programs can simulate happiness. I find that pointless and horrifying.

It's this whole "change in ways that aren't necessarily better or die" attitude that irks me about technophiles. Ultimately, the purpose of all human actions seems to be survival, happiness, or the avoidance of unhappiness. The human species can survive just fine without becoming machines or computer programs, and it can be happy as well. There is no "need" for the singularity. But too often, technology and other developments don't make people happier, so much as they make them better at killing and oppressing each other. The creation of the state may not have been a good move in ensuring that people led meaningful, enjoyable lives (what with their impersonal nature, oppressive power structures, and violence), but stateless societies rarely last long against a state's application of military, economic, and demographic pressure. Even without smallpox and other diseases annihilating them, the various stateless peoples of North America and Siberia would have had a very hard time resisting the sheer numbers of organized, wealthy, heavily armed professional European troops and settlers, regardless of whether or not those stateless societies were more pleasant to live in generally. Which is awful, and not something to be lauded.

This doesn't mean that humanity is doomed, necessarily. Hopefully, posthumans will be merely computer programs with little in the way of physical needs. They shouldn't compete much with humans for resources or land, and maybe they could exist on planets inhospitable and useless to humans such as Venus or Mercury. After all, they would exist mainly digitally; the physical world would matter little to posthumans, so long as whatever physical devices that power and house them remain intact and functioning. Me, I love the physical world and would never willingly part with it, but some posthuman program couldn't care less about whether it has a lot of land and a yacht and my country's natural resources so long as it can continue to operate and "enjoy" its simulated existence.

Just do humans a favor and leave us be. Quit life if you want to, but there's no need to eliminate or oppress humans if "you" are just a program.

The funny thing is, when I made the comment about going the way of the Neanderthals, I was thinking it would be the organic humans that would start the conflict with the synthetic humans. I say this because synthetic humans would probably still try to maintain some sort of interaction with the societies and economies of the organic humans. Now given that synthetic humans would likely be able to do calculations as fast as a computer and work tirelessly and precisely like a robot (all without any requirement for wages or healthcare), they would probably come to dominate those economies whether they intend to or not. Cheap goods manufactured by the synthetic humans would lead to a concentration of most manufacturing into the synthetic humans' "nation" and would likely drive organic-run companies out of business. Once this happens, I think the organic humans will start to fear they are slowly being "conquered" by the synthetic humans and will start to lash out against them. What would start as protectionist laws would probably escalate to trade embargoes against synthetic-run companies and eventually to organic humans attempting military action against the synthetic humans. Once that happens, I don't see that type of war going well for organic humans.

Of course, I could be completely wrong and rational voices might prevail, allowing organic and synthetic humans to coexist peacefully.
 
Or why assume machinehood would treat us any differently if it becomes the intellectually superior "being"? Any machine intelligence will at its root have human influence by nature of us being its creators. So even if its radically different, at its very basal level it would be influenced by us and easily could take on our less than positive qualities.
I don't think this is necessarily the case at all. The whole thing about AI that distinguishes it from other sorts of computer programs is that it learns and evolves. It's dynamic - the code that people write on day 1 isn't the same as the code that's running on day 100. There's no reason at all to think that the machine would adopt our bad traits.

The thing about Watson is that the way it manages the data it's given is totally different from how a human does it; basically, it just processes things extremely fast, but it can never react to situations the way a human can. I just don't see how, in the future, any machine could become "self-aware" the way we humans are.
Just because you can't imagine it doesn't mean it's impossible. I think the odds are good that a silicon brain will become self aware at some point. Commodore said it's a matter of If, not When. I think it's a matter of When. As you know, I'm a pretty hard-core materialist (I think that's the term), and I don't think there's anything special or unique to our brain that would limit self-awareness to only our specific configuration of brain matter.

Thing is, that's learning, but in the example you gave, Watson didn't exactly choose to learn medicine; something else chose it for it. I think what I'm trying to say is that while AIs like Watson may be intelligent, they aren't really sentient yet. I'm wondering if we would ever really need or want a sentient AI.
I agree that they aren't sentient. But is sentience necessary for AI? I don't think so. I think you can have sentience without the "intelligence" part, and you can have AI without sentience.
 
Just because you can't imagine it doesn't mean it's impossible. I think the odds are good that a silicon brain will become self aware at some point. Commodore said it's a matter of If, not When. I think it's a matter of When. As you know, I'm a pretty hard-core materialist (I think that's the term), and I don't think there's anything special or unique to our brain that would limit self-awareness to only our specific configuration of brain matter.

Just for clarification: I only think AI is an "if" because I feel we will integrate with machines before we develop a sapient AI. Now of course I could very well turn out to be wrong in that assumption. That is where the "if" factor comes in. To me it all depends on what comes first: sapient AI or synthetic humans. But I in no way think the development of a sapient AI is impossible.

Otherwise, I completely agree with you that a self aware machine is a very real possibility and that there is nothing inherently special about the human brain.
 
Ahh, I missed that distinction.

That raises an interesting question. Assuming you "upload" to a computer, would you be considered an AI?
 
Here's another little nugget to bake your noodle on: If you believe we were created by a deity, then are we not artificial intelligence? I mean, if we were created then that means we are not naturally occurring and are thus artificial. So if you go by that logic then sapient AI already exists and all you have to do is look in a mirror to find it.
 
Now that's the left for you. Confuse the issue in a circular jerk off over what the definitions are.
 
Hmm, I tend to think that accurate and precise definitions are the bedrock of a discussion. If we're all using the same word but in different ways then there's going to be nothing but confusion. Tower of Babel, and all that.
 
Here's another little nugget to bake your noodle on: If you believe we were created by a deity, then are we not artificial intelligence? I mean, if we were created then that means we are not naturally occurring and are thus artificial. So if you go by that logic then sapient AI already exists and all you have to do is look in a mirror to find it.

If you believe that we are created by a deity, then 'natural' means 'created by a deity' and 'artificial' means 'created by something that is not a deity' or 'created by humans' depending on whether you consider molehills to be natural.
 