Machines taking over in 2045?

Tasks (or simulated tasks for simulated "organisms"), much like those psychologists pose to test animal intelligence. Bear in mind, the portion of evolutionary history required to get to a basic mammalian brain was much larger than the portion required to get from there to human.



The evolutionary-algorithm approach to mind design requires simulating the environment, not just the organisms. So naturally, the "brain" doing those calculations is going to have to be huge.
So the low-hanging fruit is to model the ecosystem in which humanity developed? Firstly, we're not even sure what exact environment is most responsible for human intelligence. Secondly, it's not a sure bet that human-like intelligence will come out a second time. Thirdly, Moore's law may run out long before we have enough resources to simulate the whole jungle. The brain, sure, we're on the cusp, depending on how you count; but the jungle, we're not even close.

Let's pray that we are not so stupid as to create an AI that we can't shut down...
All it takes is one idiot, and the world's not short on those.:nuke:
 
But idiots don't create super-AI programs.
 
Sure they do. Smart people are idiots as often as anyone.

Being able to write a really smart program and forgetting to add an off switch are not mutually exclusive. Neither are writing one and being crazy enough to think maybe the AI should take over the world.
 
There is a nice article by Paul Allen (cofounder of Microsoft) why the singularity isn't near:
http://www.technologyreview.com/blog/guest/27206/

Paul Allen is a pretty important guy, so I'm pretty sure he's got a well-formed opinion on the topic. He's certainly a 'true believer' in progress, so he's not against the idea itself.

It's funny how it looks like an argument from incredulity, but that's the only kind of criticism you can level at Kurzweil, whose position is that 'previous progress predicts future progress'. Incredulity, or disproof, is all you have.

I've looked through Sandberg's white paper (and listened to the linked talk, plus his talk at GoogleTechTalks). It kinda looks like emulation should be possible by the 2060s (assuming similar growth in scanning and computer technologies). And, like Mr. So points out, brain emulation isn't the most likely advanced AI. But emulation doesn't suffer from the 'complexity trap', because the ability to emulate is not much advanced from where we are now; it could be done with merely more resources (the scanning and the computing would be very, very expensive with today's technology).
 
Speaking of idiots, I think a human-like AI is one that is able to be stupid just like humans are. :mischief:
 


An example (kinda old):

http://www.youtube.com/watch?v=oCXzcPNsqGA&feature=related

In order to evolve they also need some intention/objective, like survive and replicate.
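Right, a fitness function is what stands in for "survive and replicate" in an evolutionary algorithm. A minimal sketch of the idea, on a toy problem (all names and parameters here are invented for illustration, nothing to do with any real system):

```python
import random

random.seed(42)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # The "objective": here, just the count of 1-bits in the genome.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Random initial population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: the fittest half "survive and replicate".
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [mutate(crossover(random.choice(survivors),
                                  random.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
print(fitness(best))  # converges toward the maximum, GENOME_LEN
```

Swap the fitness function for "how long did the creature survive in the simulated environment" and you get the kind of thing in that video.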
 

Nice! That's the kind of evolutionary algorithm I'm talking about, all right. Of course, it would be nice to create more detailed environments and creatures, but that will come with computing power.

BTW, Moore's law might peter out - but so could the trends in scanning technology that brain emulators are hoping to count on.

Souron said:
Smart people are idiots as often as anyone.

And I'm out to prove it!
:blush:
 
If some A.I. is developed by artificial evolution in a virtualized environment, I guess it would not be harmful, unless it could get out of that virtual environment (but I don't see how that would be possible).
 

Simple: talk its way out of the box. It just has to convince a human that it would be good or right to let the AI have more freedom.
 

But then it would only get into another, larger, even more simulated box and we would see what it would do :D
 
The Thirteenth Floor ftw.

About those IBM neurosynaptic chips, do you remember HAL from "2001: A Space Odyssey"? I don't know, but I've been told that Kubrick meant "HAL" = "IBM": each letter of HAL comes one position before the corresponding letter of IBM in the alphabet. If that's so, Kubrick should be included in Civ games as a great prophet.
 
Sorry for the bump. This thread really piqued my interest, and so I've been doing some research over the last couple of weeks.

I really think we're on track. All of the necessary trends seem to be robust. The improvement in computing hardware and software is continuing strongly and seems to have strong economic incentive to continue for a long time. I think that extrapolating exponential trends in computing and information technology is quite reasonable.
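For what it's worth, the arithmetic behind these extrapolations is just compounding. A quick sketch, using the classic 18-month doubling period as a purely illustrative figure:

```python
import math

# Years for performance to grow 1000x if it doubles every 18 months.
doubling_period_years = 1.5   # classic Moore's-law figure (illustrative)
target_factor = 1000.0

doublings_needed = math.log2(target_factor)          # ~9.97 doublings
years_needed = doublings_needed * doubling_period_years

print(round(years_needed, 1))  # ~15 years per factor of 1000
```

That's why a sustained exponential trend turns "a thousand times more computing" from absurd into a question of waiting a decade and a half.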

I found out that Sandberg is a transhumanist, which makes his whitepaper on human brain scanning suspect. But I looked at other players in the field, and it seems like perfect whole-brain emulation will be possible, at the latest, by the 2060s. That's emulation only, i.e., high-detail scanning of a brain and then programming simulations of every process. The 2060s are pretty far out. It's amazing how even exponential growth is not as powerful as we'd like when it comes to scanning the brain and storing the information. There's a lot of data! But that's just emulation. AIs, in general, aren't only going to be brain emulators, and so there's general evolutionary pressure.
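"There's a lot of data" can be made concrete with a back-of-envelope estimate. The figures below (neuron and synapse counts, bytes per synapse) are common ballpark numbers I've plugged in myself, not from the whitepaper:

```python
neurons = 86e9              # ~86 billion neurons, a commonly cited estimate
synapses_per_neuron = 1e4   # order-of-magnitude ballpark
bytes_per_synapse = 10      # pure assumption: weight + connectivity info

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
petabytes = total_bytes / 1e15
print(petabytes)  # ~8.6 PB just for a static connectome snapshot
```

And that's only the wiring diagram; capturing the dynamics (ion channels, neurotransmitter state, and so on) multiplies it by some large, unknown factor.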

As an aside the Singularity Institute has released their 2011 seminars onto youtube. Word of warning, some of the talks are awful. Some are really good. My rule of thumb is, the talks from professors are good. The talks that sound like fanbois suck (and aren't worth watching).
 
One thing to note about Moore's Law is that it doesn't take into account the development of quantum computing.

http://www.wsws.org/articles/2011/jan2011/quan-j08.shtml

Taken from the above:
"A bit has a distinct disadvantage compared to a qubit. While 1000 bits could deliver about 1000 pieces of information at a time, 1000 qubits could deliver approximately 2^1000 (or 10^300) pieces of information simultaneously. This number is so large, that it is incomprehensibly larger than the number of grains of rice it would take to fill up the Solar System."

A functioning quantum computer might be far off but if it ever were developed the jump in computational power would be vast.
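The 2^1000 figure in the quoted article is easy to check, though it deserves a hedge: 2^1000 is the size of the state space, not the amount of classical information you can read out; measuring 1000 qubits still yields only 1000 classical bits. Checking the number itself:

```python
# Python integers are arbitrary-precision, so 2**1000 is exact.
state_space = 2 ** 1000
digits = len(str(state_space))
print(digits)  # 302 decimal digits, i.e. about 10^301
```

So the article's "approximately 10^300" is in the right ballpark (it's closer to 10^301), but the real promise of quantum computing is in specific algorithms, not "delivering 2^1000 pieces of information".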
 
Quantum computers will be a leap in the computer's capability to do certain computations, but there is no reason to think that it will be more than a one time leap. The details depend on the specific quantum computer technology, but you can't generally keep decreasing the size of a quantum gate, like you can a transistor. There won't be a Moore's Law of quantum computing.
 
That's true, there are a few dark horses stalking in the trendlines. Winner, luiz, and I discussed how a stabilizing of the educated population might affect things. Quantum computing would probably upset the trend lines for the world's computing power. Another potential is a revolution in education (it seems unlikely, but it is possible), where it actually becomes easier to teach concepts due to a new way of teaching. This would functionally boost the IQ of the planet, allowing another 'kick' in our acceleration trends. If we find ways of making people more intelligent (instead of just more capable with new tools), we'd gain progress even faster.

If we could get calculus into the average teen, for example. Or if these 'brain games' actually caused rewiring that benefited intelligence, we'd beat the trend lines again.
 

I'd say it's nigh inevitable.
 

That's a good point (it was an xpost for my previous one). Currently, total computing power is accelerating for two reasons: computing power per dollar and per watt keeps increasing, so more computing can be done for less; and more and more people are purchasing computing, so computing power is spreading through the population.

If quantum computing cannot be reasonably 'shrunk' too much, then it will hit a wall on how much it can accelerate our progress. However, the amount of quantum computing purchased can continue to spread in the population (though this would be a sigmoidal curve), so quantum computing will give a temporary kick (or, as you say, a one-time leap). That said, it's still something that will bring the advent of whatever computing power is 'necessary' for a Singularity closer, more rapidly.
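Those two factors can be sketched as a toy model: per-unit power growing exponentially, adoption following a logistic (sigmoidal) curve, and total power as their product. All parameters here are invented for illustration:

```python
import math

def per_unit_power(t):
    # Exponential improvement: doubles every 1.5 years (illustrative).
    return 2 ** (t / 1.5)

def adoption(t):
    # Logistic spread through the population, saturating at 1.0.
    return 1.0 / (1.0 + math.exp(-(t - 10) / 3))

def total_power(t):
    return per_unit_power(t) * adoption(t)

# Early on, both factors drive growth; once adoption saturates,
# only per-unit improvement is left.
growth_early = total_power(6) / total_power(5)
growth_late = total_power(26) / total_power(25)
print(round(growth_early, 2), round(growth_late, 2))  # ≈2.1 vs ≈1.6 per year
```

Same story for a one-time quantum leap: it shifts the curve up while it spreads, then growth falls back to whatever the underlying per-unit trend is.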


Yeah, in a way, you're correct. People will continue to think about this problem, and eventually there will be a viable innovation. The value of this innovation cannot be overstated. Being able to teach in a faster or more graspable manner would have amazing amounts of leverage. I guess it's inevitable, but I just don't see it on the horizon.
 