Hawking et al.: Transcending Complacency on Superintelligent Machines


============================

I don't think 'scientific progress' is likely to ever halt due to an actual lack of new study because we would know most stuff already.

I did not suggest that scientific progress will halt "because we would know most stuff".

Of course we will never know most stuff.

I suggested that scientific progress will halt when we reach the limits of our intelligence.

I don't think 'scientific progress' is likely to ever halt due to an actual lack of new study

There will be new study, but it will not produce any new results, due to the limited capabilities of our brains.

That said, while our human ability to make new scientific breakthroughs seems infinite

Indeed, we delude ourselves that it might be infinite, only because we have not reached its limits so far.

But, for example, we know that the abilities of our bodies are limited, since we have already reached those limits.
 
Domen said:
I suggested that scientific progress will halt when we reach the limits of our intelligence.

^We won't. Humans seem to be exactly that which can never reach its own limit, while both it and the limit still exist and function.
 
^We won't. Humans seem to be exactly that which can never reach its own limit,

What makes you delude yourself into thinking like this???

We have learned that our bodies have only limited capabilities. We have reached those limits, and we know them.

Why should it be different with our brains? Just because we have not reached those limits so far?

Believing that we can never reach our own limit is like believing that God exists.
 
What makes you delude yourself into thinking like this???

We have learned that our bodies have only limited capabilities. We have reached those limits, and we know them.

Why should it be different with our brains? Just because we have not reached those limits so far?

Believing that we can never reach our own limit is like believing that God exists.

Look at a parallel:

A boy climbing over a wall has reached the limit of the wall.
But the boy has not reached any limit in knowing what the wall is.

*

The human body does die, obviously. I don't know if we die utterly with it (maybe; like I said, no one knows). But even if we do die with it, this has nothing to do with the infinite complexity of the actual mental world of a human. That, surely, is infinite next to human consciousness, and anything infinite next to something specific will, by definition, still be infinite when both it and that specific point alter in ways related by functions which do not negate that infinity, whatever the altered situation, distance, power, or any other term one wishes to use. I.e., the human unconscious will never be examined in totality.
 
But the boy has not reached any limit in knowing what the wall is.

Not yet. But he will reach it at some point.

===================================

We know that our bodies have limited capabilities because our brains allow us to understand this.

We don't know yet that our brains also have limited capabilities, because when we think, we use only these limited brains. We know how many things science has discovered so far, but we don't know how many things were not discovered, and will never be discovered, due to the limits of our mental capabilities.

My picture is pessimistic, I know.
 
Not yet. But he will reach it at some point.

===================================

We know that our bodies have limited capabilities because our brains allow us to understand this.

We don't know yet that our brains also have limited capabilities, because when we think, we use only these limited brains.

We know how many things science has discovered so far, but we don't know how many things were never discovered due to the limits of our capabilities.

(emphasis mine).

I agree with your sentence there. Only, in my view, those "limited abilities" are already something super-massive. It is just that the total there is to know is infinitely larger anyway. And the total to know regardless of human limits is most likely hugely bigger than that too.
 
Then I maintain my position that I don't think this can be created by humans. Ever.

*

Sensing that something exists (it does not even have to be that the entity doing the sensing senses that it itself exists) seems to me to be something we will never create, not even in a series of millennia.

My position is that if you accept that human intelligence and the other animal intelligences we see around us arose naturally, then there is no reason to think that we cannot engineer an equivalent, given enough resources.

Your position, if I'm reading you correctly, is that humans arose naturally, but it's impossible for humans to make something that may evolve into sentience.

That seems to me to be a logical error.
 
Maybe human intelligence is not sentient but just many times more complex than that of computers, while in fact no different in nature.

You might be using a different set of definitions. Of course human intelligence is sentient.

But probably our intelligence also has limits - and thus is not sentient. At some point, we will reach these limits. And scientific progress will be halted.

It very well might. We appear to be capable of iterative learning, i.e., I learn something and then teach you, and then you go learn something new. But this might have natural limits. If it takes you 40 years to learn what I know, you then don't really have much time to build upon it.

We can work around this, but there might be a hard limit to what 'humanity' can know. But we also might not have a hard limit; it's unclear right now. And the trends are a bit confusing, because we're currently in the expansionary phase of adding population (so the 'one in a million' scientist is increasing in number), and we have a host of nutritional and other reasons why IQs are still rising toward our 'natural' limit.
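
To make the "hard limit" worry concrete, here's a toy model. It's purely my own sketch, not anyone's established result; the 40-year career comes from the paragraph above, and the other two parameters are invented for illustration:

LIFESPAN = 40.0       # productive years per generation (the figure above)
RELEARN_RATE = 0.5    # years needed to absorb one unit of accumulated knowledge (assumed)
DISCOVERY_RATE = 1.0  # knowledge units produced per year of research (assumed)

knowledge = 0.0
for generation in range(1, 16):
    # Each generation first re-learns the corpus, then researches with what's left.
    research_years = max(0.0, LIFESPAN - knowledge * RELEARN_RATE)
    knowledge += research_years * DISCOVERY_RATE
    print(f"generation {generation:2d}: knowledge = {knowledge:5.1f}")

The totals climb 40.0, 60.0, 70.0, 75.0, ... and flatten toward LIFESPAN / RELEARN_RATE = 80: once re-learning fills the whole career, no time is left for discovery. Note that specialization and written records (raised in the replies below) amount to shrinking RELEARN_RATE, which raises this ceiling rather than abolishing it.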

Once we learn how to make ourselves more intelligent, then it becomes unpredictable. Will we use that expanded intelligence to discover new ways of becoming intelligent?
 
My position is that if you accept that human intelligence and the other animal intelligences we see around us arose naturally, then there is no reason to think that we cannot engineer an equivalent, given enough resources.

Your position, if I'm reading you correctly, is that humans arose naturally, but it's impossible for humans to make something that may evolve into sentience.

That seems to me to be a logical error.

You are reading it correctly. But I do not share the view that nature and something that senses things as present (e.g. a human, an animal, an insect, a fungus) are linked in any very direct manner. Surely those organisms arose in 'nature', but not in any way involving one of them being a creator. So why assume that an organism which did not calculate its own creation, or the parameters allowing it to exist, will later be in a position to artificially create something that would be living?

I am not claiming that humans are incapable or dumb. Far from it. It is my view that our species should have been vastly better by now. Maybe it will be in the future. But I see no reason to conclude that our development will allow for the creation of an actual AI.

Maybe the P = NP question, and its likely negative answer, is related to this. From my little reading on it, I noted that it is argued to be linked to the prospects of creating AI.
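
For reference, the question itself is quick to state, even though the link to AI is an argument, not a theorem. Roughly, in standard notation:

$$\mathrm{P} \stackrel{?}{=} \mathrm{NP}, \quad \text{where } \mathrm{P} = \{\text{problems solvable in polynomial time}\} \text{ and } \mathrm{NP} = \{\text{problems whose proposed solutions are verifiable in polynomial time}\}.$$

If P ≠ NP, as most complexity theorists expect, then many problems whose answers are easy to check admit no general efficient method for finding those answers. The suggested connection is that intelligent behavior may require exactly that kind of search; but that is a contested reading, not a settled result.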
 
Yes, you are still you, and you are still human. It is your consciousness that makes you human and makes you who you are, not the physical components that carry that consciousness around.

If it's your consciousness that makes you human, it's still clearly not you after you've uploaded this information into a computer.

Consciousness is an emergent property of your physical components.

If it wasn't, there'd be no need to upload this information to a computer to achieve immortality or whatever. Your brain gets splattered, oh well, consciousness is not the physical components that carry that consciousness around, right?

So all you're doing is creating, at best, a second stream of consciousness that operates on the same stream of information as you. But wait! If this is supposed to be an intelligent machine, that means it has to be able to learn, which means it has to be able to operate under different information. So now you have a device that has different memories than you, a different consciousness than you, and a different physical pattern than you. If that's what you want, I can sell you that very thing right now.

This whole business is based around some fuzzy thoughts about self and identity that basically amounts to a digital ka jar.
 
My position is that if you accept that human intelligence and the other animal intelligences we see around us arose naturally, then there is no reason to think that we cannot engineer an equivalent, given enough resources.

Your position, if I'm reading you correctly, is that humans arose naturally, but it's impossible for humans to make something that may evolve into sentience.

That seems to me to be a logical error.

If your point is that things that happen naturally can never be engineered, then we will just have to wait for the day machines naturally gain intelligence.
 
It has not happened yet, so I have not missed it.
 
Well, when Firaxis makes a Civ AI that can keep me on my toes, I'll worry about this.
 
It very well might. We appear to be capable of iterative learning, i.e., I learn something and then teach you, and then you go learn something new. But this might have natural limits. If it takes you 40 years to learn what I know, you then don't really have much time to build upon it.

That's where specialization comes into play: you don't teach one person everything you know; you teach several people parts of what you know, so each can expand that part of your knowledge.

Even better: you write your knowledge down and put it somewhere in an archive, so it does not have to be actively known by anyone. People needing to build on a specific part of your knowledge can look up just that part without having to know the rest. A field can stay dormant for decades because nobody is interested in it or has the capacity to work on it, but with written records the knowledge can be quickly reacquired and then expanded upon.

There might be a limit to what humans can collectively know at any given time, but I don't think there is a limit (other than limits to storage space, which are extremely large in theory) to human knowledge defined as the knowledge we can access if we need to.

Once we learn how to make ourselves more intelligent, then it becomes unpredictable. Will we use that expanded intelligence to discover new ways of becoming intelligent?

Or we can invent (and already have invented) ways to facilitate the acquisition of knowledge. And then we use those to invent ways to make gaining knowledge even easier.
 
Yeah, in theory, we should become progressively more knowledgeable. It's just tough to figure out the trends right now. We're seeing an expansion of the number of people with 145 IQ, as more and more of the developing world joins the developed world's education system, etc.

Like you say, there are other ways knowledge can keep on expanding, but it's nothing like adding millions of new geniuses. This is why Whole Brain Emulation will be so transformative: it'll vastly expand the number of clever intelligences available, a shift that will make the 20th century seem slow.
 