Hawking et al: Transcending Complacency on Superintelligent Machines

Yes, which is why I think the "decisive decision" part isn't all that important in evaluating the success of a cybernetic system. Our own nervous systems have a tricky time passing it, and a cybernetic system can easily be designed that places the organic portions in a servile position while still meeting the standard of "decisive decision."

It seems like we need a better standard for judging a cybernetic system.

I don't get it. You seem to agree that condensing multi-dimensional advantages/disadvantages into a single dimension of preference-ordering is "tricky". So, then what do you propose? Any way I try to read the rest of what you say, it comes out weird and unbelievable. Please re-explain, perhaps prefacing some part with "I know this will sound crazy at first but...", if appropriate.
 
I'm sure other, smarter people on this site have already been talking about this, but I realized something incredible about all of it.

If we manage to upload, then anyone can become a space-farer.

Our meat will no longer require the artificial habitat we are forced to bring along with us when we leave Earth.

And that means there's nothing to stop someone from actually travelling interstellar distances, assuming their hardware is robust enough.

Kind of blew my mind.

Who needs hardware and all the risks involved in physical interstellar travel? Just beam the information required to create a clone of ourselves at light speed to the fax machine in the next solar system over, and there "you" are. You could even make yourself out of meat again when you get there, assuming the habitat is acceptable to your biological needs. (And assuming there is not a robot brain in charge that has deemed a meat-based physical manifestation to be a frivolous waste of resources...)

Of course something needs to spend the time getting there to build the interstellar fax machine, but there is nothing a fleet of Von Neumann probes cannot handle...
 
I don't get it. You seem to agree that condensing multi-dimensional advantages/disadvantages into a single dimension of preference-ordering is "tricky". So, then what do you propose?
I'm not certain. I'm offering critique and problems more than solutions, I admit. I'm not certain what the key standard for judging a brain is, and I'm not 100% certain that such a thing is a real problem.


If I had to venture a standard for how a cybernetic brain should function, I would say that the important thing is that it functions as an integrated whole, with a common telos. But I recognize that this standard is hazy and only half thought out.

Any way I try to read the rest of what you say, it comes out weird and unbelievable. Please re-explain, perhaps prefacing some part with "I know this will sound crazy at first but...", if appropriate.
I know this will sound crazy at first, but I don't think whether or not the organic brain has a decisive say is all that important in judging the desirability of a cybernetic brain.

I say this because:
1) I have outlined a brain that fulfills the "decisive say" standard, but which we agree is undesirable.
And
2) I am unconvinced that our own nervous systems conform to it. This may sound crazy, but I'm worried an emphasis on decisive say might lead us to conclude that our own brains are imprisoned by the design of nature.

Therefore, I think the decisive decision-making potential of the biological portion isn't that important or useful in judging a brain, and may even be unhelpful.
 
Oh! Remember that the silicon brain can model the biobrain, and so only do things that the biobrain would approve of. Or at least often enough that the biobrain still feels it has a say.

For example, my autonomic nervous system usually breathes properly for me. It is successfully modeling 'what I want' without any input from 'me' most of the time. I can still control it when I want to, but mostly don't care.
 
I say this because:
1) I have outlined a brain that fulfills the "decisive say" standard, but which we agree is undesirable.
And
2) I am unconvinced that our own nervous systems conform to it. This may sound crazy, but I'm worried an emphasis on decisive say might lead us to conclude that our own brains are imprisoned by the design of nature.

Therefore, I think the decisive decision-making potential of the biological portion isn't that important or useful in judging a brain, and may even be unhelpful.

On (1), just view it as a necessary condition, not a sufficient one.
On (2), I don't see it. Evolution pretty much guarantees that our conscious thought and decision-making plays a decisive role in our behavior - otherwise these brain activities never would have evolved. They cost way too many calories to be mere frills.

Oh! Remember that the silicon brain can model the biobrain, and so only do things that the biobrain would approve of. Or at least often enough that the biobrain still feels it has a say.

For example, my autonomic nervous system usually breathes properly for me. It is successfully modeling 'what I want' without any input from 'me' most of the time. I can still control it when I want to, but mostly don't care.

It all depends on how that model of the biobrain works. If it's sufficiently detailed, if the information flows in the exact same neural-network patterns as in your brain, then that just IS your decision, migrated into silicon. Even if those information flows are enveloped within even larger information flows, your decision is still happening.

On the other hand, if the model works differently, then the computer is like an adviser, or an agent. It might be extremely reliable, and there's nothing wrong with that. But I'd still like to make a few decisions myself, if only to keep the silicon modeler honest.
 
On (1), just view it as a necessary condition, not a sufficient one.
On (2), I don't see it. Evolution pretty much guarantees that our conscious thought and decision-making plays a decisive role in our behavior - otherwise these brain activities never would have evolved. They cost way too many calories to be mere frills.

It is a shame that in the last 300 years and even longer, that behavior has wiped out brilliant people and left bullies (with the biggest toys) to thrive.
 
It is a shame that in the last 300 years and even longer, that behavior has wiped out brilliant people and left bullies (with the biggest toys) to thrive.
Erp, you're telling us that people were more brilliant and less bullying 300 years ago?
 
I am pointing out that evolution is not working when it comes to behavior.
 
I am pointing out that evolution is not working when it comes to behavior.
Your point relies on some terribly dubious premises: that people were more brilliant and less bullying 300 years ago.
 
I was hinting more at the selection factor. Those who should survive do not always get the chance, and those that are somewhat less than desirable always seem to come forward and grace us with their presence.
 
I was hinting more at the selection factor. Those who should survive do not always get the chance, and those that are somewhat less than desirable always seem to come forward and grace us with their presence.

Hinting, eh?

You are making a case for why sexual selection during the greater modern era is a superior selection method to natural selection, precisely because it favors nicer, more brilliant people.
 
All I said was that it is a shame it does not work well with behavior.
 
It *does* work well with behavior, just not the particular behavior you're hoping for.

Evolution isn't goal oriented; it's purely and exclusively utilitarian: what works passes, anything else doesn't matter.
 
Evolution isn't goal oriented,

Which is precisely why we need to seize control - before evolution takes our descendants in directions we don't want them to go. This applies especially to our artificially created "descendants", who/which have the potential to evolve very quickly.
 
Which is precisely why we need to seize control - before evolution takes our descendants in directions we don't want them to go. This applies especially to our artificially created "descendants", who/which have the potential to evolve very quickly.

Are you being serious right now?
 
Why do you ask? There might be a misunderstanding, but to me, he's not joking.
It's akin enough to my own thinking that it seems 'reasonable' to say, even if I hadn't had those specific thoughts myself.
 
Why do you ask? There might be a misunderstanding, but to me, he's not joking.
It's akin enough to my own thinking that it seems 'reasonable' to say, even if I hadn't had those specific thoughts myself.

I just don't see why he feels we need to "seize control" of evolution. To me, trying to control evolution has far darker consequences and implications than developing a truly sapient AI.

The part that particularly bothers me is the part where he said "before evolution takes our descendants in directions we don't want them to go". Who are we to decide which evolutionary traits are good and which ones are bad? If AI or cybernetic humans evolve to become superior to biological humans, then so be it. Even though AIs or cybernetic humans are synthetic creations, they are still living beings that should be allowed to evolve, grow, and develop without interference, just as we were able to. Saying their evolution needs to be controlled so they don't become superior to us implies that synthetic life should always be subservient to its organic creators, and that is a philosophy I just cannot endorse. Who are we to hold back the development of another life form simply because it might threaten our position as the dominant life form on the planet? Does being the creator somehow give you ownership over that life form?
 
Which is precisely why we need to seize control - before evolution takes our descendants in directions we don't want them to go. This applies especially to our artificially created "descendants", who/which have the potential to evolve very quickly.

It might be useful to distinguish between evolution via natural selection and evolution via non-natural (artificial) selection. I avoid defaulting to the traditional term "artificial selection" so as not to unintentionally confuse it with artificial intelligence.

I think you're referring strictly to non-natural selective evolution?
 
Natural evolution will happen regardless; you cannot help that. The only thing to remember is that we can still guide artificial evolution, and that allows us to swamp the natural effects.
 
Natural evolution will happen regardless; you cannot help that. The only thing to remember is that we can still guide artificial evolution, and that allows us to swamp the natural effects.

But why do we need to guide artificial evolution? Once our synthetic creations start evolving on their own, why shouldn't we just leave them alone to develop? I mean, synthetic life is still life, is it not?
 