It's alive, and it's scary as hell.......

aneeshm

YouTube Link

The above shows a robot which learns from its environment by building models of itself, and testing them out.

It's also hellishly scary when you first look at it - it looks so absolutely alive, yet so utterly alien. You even feel pity for it when one of its limbs is damaged, and it tries to walk, because it's really reminiscent of a crippled animal.

Link 1
Link 2

The actual research
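
For anyone curious how the "builds models of itself and tests them out" part might work, here's a toy sketch in Python. To be clear, this is just my own illustration of the idea, not the researchers' code: the hidden "body" is reduced to two limb lengths, all the helper names are made up, and the real system uses a proper physics simulator and evolutionary search rather than the crude hill-climbing here. But the act / observe / refine-the-self-model loop is the same shape as what the paper describes.

```python
import math
import random

TRUE_BODY = (1.0, 0.6)  # the robot's real (hidden) limb lengths

def predict(body, action):
    """What a tilt sensor might report for a joint angle, given a body guess."""
    l1, l2 = body
    return l1 * math.sin(action) + l2 * math.cos(2 * action)

def sense(action):
    """Run the action on the 'real robot': a noisy reading of the true body."""
    return predict(TRUE_BODY, action) + random.gauss(0, 0.01)

def error(model, experiments):
    """How badly a candidate self-model explains the data gathered so far."""
    return sum((predict(model, a) - y) ** 2 for a, y in experiments)

def refine(model, experiments, steps=200):
    """Crude hill-climbing stand-in for the paper's evolutionary model search."""
    best = model
    for _ in range(steps):
        candidate = tuple(x + random.gauss(0, 0.05) for x in best)
        if error(candidate, experiments) < error(best, experiments):
            best = candidate
    return best

def disagreement(models, action):
    """How much the current self-models disagree about an action's outcome."""
    predictions = [predict(m, action) for m in models]
    return max(predictions) - min(predictions)

# The self-modelling loop: act, observe, refine the self-models, repeat.
experiments = []
models = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(5)]
actions = [i * math.pi / 8 for i in range(16)]

for cycle in range(8):
    # 1. Pick the action the models disagree about most -- the most informative test.
    action = max(actions, key=lambda a: disagreement(models, a))
    # 2. Perform it "for real" and record what the sensors say.
    experiments.append((action, sense(action)))
    # 3. Refine every candidate self-model against all the data so far.
    models = [refine(m, experiments) for m in models]

best = min(models, key=lambda m: error(m, experiments))
print("best self-model:", best, "  true body:", TRUE_BODY)
```

The interesting bit, if I'm reading the paper right, is the first step of the loop: instead of trying random movements, the robot deliberately picks the movement its competing self-models disagree about most, which is why it gets away with so few physical trials.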
 
Its motors make some kinda insecty/animal-like high-pitched sounds too, which might contribute to the 'icky' factor :)

Still, very interesting stuff :goodjob:
 
Step on it! Hurry up before it gets to the children!
 
It's also hellishly scary when you first look at it - it looks so absolutely alive, yet so utterly alien. You even feel pity for it when one of its limbs is damaged, and it tries to walk, because it's really reminiscent of a crippled animal.
That was my exact reaction.
 
That is freakishly creepy.
 
That would make for an awesome pet.
 
YouTube Link

The above shows a robot which learns from its environment by building models of itself, and testing them out.

It's also hellishly scary when you first look at it - it looks so absolutely alive, yet so utterly alien. You even feel pity for it when one of its limbs is damaged, and it tries to walk, because it's really reminiscent of a crippled animal.

Link 1
Link 2

The actual research

Such research should be outlawed. We (humans) obviously suffer from some kind of suicidal instinct when we try to build autonomous, self-aware and, what's worse, even self-replicating machines.

All we need is one Skynet, and then we're gonna be replaced by a more advanced intelligence.
 
That's really interesting. We have robots that generate locomotion sequences themselves at my uni as well, but I haven't seen self-modelling at this complexity before. Site bookmarked!
(The little robot is damn cute, too. Watch the baby struggle to walk... aww!)
 
Such research should be outlawed. We (humans) obviously suffer from some kind of suicidal instinct when we try to build autonomous, self-aware and, what's worse, even self-replicating machines.

All we need is one Skynet, and then we're gonna be replaced by a more advanced intelligence.

:undecide: I want to find sarcasm in this, but keep getting reminded that there are people who truly believe it.



Awesome robot, up in the OP. Great to see that we're learning how to teach machines how to learn.
 
:undecide: I want to find sarcasm in this, but keep getting reminded that there are people who truly believe it.

Yes, and there are people who, like always, can't see the danger hidden in their work even if it's standing right in front of them.

Our attempts to create a thinking machine are like giving an atomic bomb to a 5-year-old.
 
Such research should be outlawed. We (humans) obviously suffer from some kind of suicidal instinct when we try to build autonomous, self-aware and, what's worse, even self-replicating machines.

All we need is one Skynet, and then we're gonna be replaced by a more advanced intelligence.

I don't think it's possible or desirable to outlaw this.

As our understanding of neurobiology, neuroscience, consciousness, philosophy, computers, and mathematics increases, it will become easier and easier to simulate processes which are even more efficient (though probably much less robust) than the human brain at learning. I think this is a good thing, because there are only two scenarios in which there could be a serious threat to our dominance:

a) If a self-replicating robot is powerful enough to build itself up to the point where it can defeat all the armies of the world combined - which I don't think is possible (weapons research is not open to the public, and therefore not open to this hypothetical robot)

b) A program which spreads virally on the internet and uses the whole thing as a gigantic brain. Even in this case, simply shutting down the root servers will destroy its capacity. And for a program to actually do something like this, it must be able to mutate. If that happens, before it can reach any size large enough to control anything, it will have branched into mutually competing variants, thus posing no central threat.

And there is a trivially simple way of "keeping the drones in line" - program them to enjoy serving their human masters, if we assume for a minute that we are going to build an underclass of robot slaves.

If we make Asimov's Three Laws of Robotics the foundation, we should have nothing to worry about:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
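
Just to make that concrete (and half in jest), here's a purely hypothetical toy in Python showing the three laws as a prioritized filter that a candidate action has to pass. The field names are invented and obviously nothing real works like this, but it does show why the ordering of the laws matters.

```python
# Purely hypothetical toy: Asimov's Three Laws as a prioritized action filter.
# The "action" fields below are invented for illustration only.

def permitted(action):
    # First Law (highest priority): never allow harm to a human.
    if action["harms_human"]:
        return False
    # Second Law: obey a human order (it already passed the First Law check).
    if action["is_human_order"]:
        return True
    # Third Law: otherwise, don't take actions that destroy the robot itself.
    return not action["endangers_self"]

# Ordered to shut itself down: allowed, because the Second Law outranks the Third.
print(permitted({"harms_human": False, "is_human_order": True, "endangers_self": True}))   # True
# Ordered to hurt someone: refused, because the First Law outranks the Second.
print(permitted({"harms_human": True, "is_human_order": True, "endangers_self": False}))   # False
```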
 
Yes, and there are people who, like always, can't see the danger hidden in their work even if it's standing right in front of them.

Our attempts to create a thinking machine are like giving an atomic bomb to a 5-year-old.

Well, yes, but I'm trying to say that this is far too simple to be considered a "thinking machine".

If there was any chance that something a researcher was working on would result in the creation of a competitor to the human race for Earth-wide hegemony, I would be against it.

But this is absolutely nothing like that. Though I admit that the demo is VERY impressive - that's why I posted it, after all - the idea behind it is very simple, and nothing at all like a real "thinking machine". It's nothing more than a glorified control system.
 
This is nothing new or scary. The people who are scared know too little about computers and read too much science fiction.
 
Our attempts to create a thinking machine are like giving an atomic bomb to a 5-year-old.
Uh, yeah... Could you expand on that?

This is nothing new or scary. The people who are scared know too little about computers and read too much science fiction.
Yup. (Bad science) fiction, too. (it might be entertaining, but one shouldn't make any decisions based on it)
 
Our attempts to create a thinking machine are like giving an atomic bomb to a 5-year-old.

What's a 5-year-old gonna do with an atomic bomb? The kid probably couldn't even figure out how to arm it.


That thing is pretty sweet though. Robots are coming along nicely.
 
My girlfriend's reaction was that it was cute. My reaction was that it was cool. I guess that about sums the thing up.

Winner said:
Yes, and there are people who, like always, can't see the danger hidden in their work even if it's standing right in front of them.

Our attempts to create a thinking machine are like giving an atomic bomb to a 5-year-old.

You've been watching far too much Sci-Fi. If you remember... these are the same people who predicted we'd be moving around in flying cars by the year 2000.
 
I'm wondering how well this AI, as it stands, could cope with encountering outside stimuli. It appears to only worry about figuring out what shape its body is, and finding a way to make that body walk on a flat surface. How would it handle running into a slope or obstacle?
 
It looks like a Dark Protoss Dragoon that's had a few too many.

*hic* fuuurrrrrr Aiurrrrr!!...*trips* *hic* You addresthth me!? *slip* No YOU shut up! Justhth wait until I figur out heruer to get *burp* over thurrrr....
Ummmm, **** I thinkfffth someone took myllleg! *falls off table*
 