Should we be worried?

Isaac Asimov gave us three very profound and seemingly sacred laws for
autonomous machines to operate by.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
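For concreteness, the laws encode a strict precedence: a lower-numbered law always overrides the ones below it. Here's a toy sketch in Python (entirely my own illustration, nothing from the stories; the action names and flags are made up):

```python
# Toy illustration of the Three Laws as a strict precedence ordering.
# All names and flags here are invented for the example.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool          # would doing this injure a human?
    prevents_human_harm: bool  # does this save a human from harm?
    obeys_human_order: bool    # was this ordered by a human?
    preserves_self: bool       # does this protect the robot itself?

def choose(actions: list[Action]) -> Action | None:
    # First Law, action clause: anything that harms a human is simply forbidden.
    legal = [a for a in actions if not a.harms_human]
    if not legal:
        return None
    # Rank the rest: Law 1's inaction clause beats Law 2, which beats Law 3.
    legal.sort(key=lambda a: (a.prevents_human_harm,
                              a.obeys_human_order,
                              a.preserves_self),
               reverse=True)
    return legal[0]

options = [
    Action("retreat to safety", False, False, False, True),
    Action("obey order to open fire", True, False, True, False),
    Action("shield a bystander", False, True, False, False),
]
print(choose(options).name)  # -> shield a bystander
```

The sort order is the whole point: a tuple of booleans sorts lexicographically, so Law 1 always trumps Law 2, which trumps Law 3.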


From what I understand, the current trend in the military's use of and research into robotics ultimately transgresses all three of the laws above.


My question is: Should we be worried? What do you think?

My thoughts exactly. I am not worried about the immediate future (10-20 years), but later, when these systems are given limited intelligence of their own (so they can keep functioning despite enemy EM jamming), things could go very bad very fast.

I think the following military developments should be banned (and stay banned):

1) Cybernetic warfare systems that possess A.I. (i.e. are capable of extensive independent operations)
2) Military use of nanotechnology (including "soft" nanotech)
3) Anti-satellite systems that physically shatter satellites (and thus create tens of thousands of pieces of debris that would render low Earth orbit even more dangerous than it already is).
 
Part of the problem with using Asimov is that he created those laws so that they would break in interesting ways. They're not good 'laws' for robots to have.

Military hardware has been killing autonomously for quite a while; the best example I can think of is the landmine. Yeah, we should be worried about autonomous AI. In fact, what we most need is AI that can help stay the hand of a soldier when the soldier makes a mistake.
 
This is pure fantasy and will never happen.

What will never happen? A.I.? Autonomous robotic warfare systems?

Because a lot of people are, for some reason, trying to build the former, and the latter is already being tested on battlefields.

We need some sort of international understanding of the kind that exists in other areas. All major powers have agreed that using biological weapons is too dangerous and too unpredictable, so they have refrained from it (even though the USSR ran a secret bioweapons programme).

Right now, I am most concerned about the a-sat arms race. Even a limited war between China and the US would probably clog low Earth orbit with debris from dozens of destroyed spy satellites, and we can't afford that. In my opinion, a-sat weapons that physically destroy satellites need to be banned. The chance of that happening is slim, though, because the military imbeciles on both sides probably don't realize the dangers and only see the benefits.
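To put rough numbers on "clog": the 2007 Fengyun-1C intercept alone left roughly 3,000 trackable fragments in orbit. A back-of-envelope sketch (the satellite count and the small-debris multiplier are my own assumptions, not measured figures):

```python
# Back-of-envelope estimate of debris from a limited a-sat exchange.
# ~3,000 trackable fragments per kill is roughly the Fengyun-1C figure;
# everything else below is an assumption for illustration.
FRAGMENTS_PER_KILL = 3_000      # trackable pieces (~10 cm and up)
SMALL_DEBRIS_MULTIPLIER = 10    # assumed ratio of lethal 1-10 cm pieces
satellites_destroyed = 50       # "dozens of spy satellites", assumed

trackable = satellites_destroyed * FRAGMENTS_PER_KILL
lethal_estimate = trackable * SMALL_DEBRIS_MULTIPLIER
print(f"{trackable:,} trackable fragments")       # 150,000
print(f"~{lethal_estimate:,} lethal fragments")   # ~1,500,000
```

Even with these crude numbers, a short exchange would multiply today's tracked debris population several times over.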
 
Yeah, nanotech basically falls under the same heading as biological warfare.
Of course, computing is a type of nanotech, but that's not really what's meant.
 
1) Cybernetic warfare systems that possess A.I. (i.e. are capable of extensive independent operations)
2) Military use of nanotechnology (including "soft" nanotech)
3) Anti-satellite systems that physically shatter satellites (and thus create tens of thousands of pieces of debris that would render low Earth orbit even more dangerous than it already is).

Never going to happen. Nanotech research can easily be camouflaged as chemistry or physics projects. The tools required for it are not that hard to get, unlike standard NBC production equipment. It is too useful as well. Materials that can withstand extreme stress like nothing we've ever seen? Not going to be shelved.

As for orbital debris, there are quite a few ingenious ways we could clean it up if serious need ever arose (and it will, I'm sure; there is plenty of junk up there already). Laser brooms and the like.

Regarding AI, primitive neural networks have already been used to plan the invasion of Iraq and handle its logistics; without them, it would have taken months to plan all that. Though I see where you could be going with this: AIs using molecular manufacturing to create armies out of dirt and fighting the AIs of other nations equipped with the same, in essence creating an Infinite War that never ends. Not much you can do about that.

I am also pleased by how AIs seem to unnerve religious people, as if creating sentience were exclusively God's domain.
 
Asimov wrote those laws because he saw robots as tools designed by human beings for the use of human beings, and he thought the idea of their not having safeguards unrealistic and preposterous. His "Three Laws" were vital to the functioning of his 'positronic' brains. Our robots are quite primitive by comparison, and they take orders from human beings -- so I'd say concern is warranted, given that the government can and will use its tools against the people it supposedly serves.
 
Never going to happen. Nanotech research can easily be camouflaged as chemistry or physics projects. The tools required for it are not that hard to get, unlike standard NBC production equipment. It is too useful as well. Materials that can withstand extreme stress like nothing we've ever seen? Not going to be shelved.

Not all nanotech research, only its direct use as a weapon of war.

As for orbital debris, there are quite a few ingenious ways we could clean it up if serious need ever arose (and it will, I'm sure; there is plenty of junk up there already). Laser brooms and the like.

Sci-fi at this point. If the war occurred tomorrow, we could kiss spaceflight goodbye for decades to come. The economic costs would be incalculable. Kinetic a-sat weapons should be banned because they are a sort of "scorched earth" weapon.

Regarding AI, primitive neural networks have already been used to plan the invasion of Iraq and handle its logistics; without them, it would have taken months to plan all that. Though I see where you could be going with this: AIs using molecular manufacturing to create armies out of dirt and fighting the AIs of other nations equipped with the same, in essence creating an Infinite War that never ends. Not much you can do about that.

A.I. should never be used to fight a war. Any (true) A.I. we create should be programmed to be absolutely incapable of committing violent acts against human beings (or in general).

I am also pleased by how AIs seem to unnerve religious people, as if creating sentience were exclusively God's domain.

It's unnerving atheists like me as well, but for completely different reasons. Humans should generally avoid creating things they're not sure can be controlled.
 
Not all nanotech research, only its direct use as a weapon of war.

Even so, it would be impossible to monitor and control unless you slapped a nano-camera on everyone on the planet with a degree in chemistry or physics.

Sci-fi at this point. If the war occurred tomorrow, we could kiss spaceflight goodbye for decades to come. The economic costs would be incalculable. Kinetic a-sat weapons should be banned because they are a sort of "scorched earth" weapon.

They are already in the works, actually. While the cost might be a little high in the short term, it is not something that would ground us forever.

A.I. should never be used to fight a war. Any (true) A.I. we create should be programmed to be absolutely incapable of committing violent acts against human beings (or in general).

Hard sell for the families of soldiers, not to mention officers. I'm more concerned about such AI being used for surveillance and for sifting through large quantities of personal data than being used to wage war.

It's unnerving atheists like me as well, but for completely different reasons. Humans should generally avoid creating things they're not sure can be controlled.

We can scrap the LHC then, and atomic power as well; physicists once theorized that setting off a bomb could ignite the whole atmosphere. Progress always comes at a cost. While the stakes are high, I think not pursuing it could cost us more in the end. Living on borrowed time, etc.
 
My thoughts exactly. I am not worried about the immediate future (10-20 years), but later, when these systems are given limited intelligence of their own (so they can keep functioning despite enemy EM jamming), things could go very bad very fast.

I think the following military developments should be banned (and stay banned):

1) Cybernetic warfare systems that possess A.I. (i.e. are capable of extensive independent operations)
2) Military use of nanotechnology (including "soft" nanotech)
3) Anti-satellite systems that physically shatter satellites (and thus create tens of thousands of pieces of debris that would render low Earth orbit even more dangerous than it already is).

Terrible idea all around. You might consider laws preventing our military from utilizing such technologies, but banning the research itself just guarantees that we won't know how to counter it when the Chinese (or whoever, insert your favorite bogeyman here) inevitably develop it.
 
We already have ship-based missiles that can destroy satellites. You can't put technology back in the bag once it's out there.

I think instead of three rules for robots, we need more rules for humans to follow. Something like... the Ten Commandments! (but updated for modern times)
 
I'm not worried. It's all too far into the future to be really worried about, and it's still borderline speculation IMO.

And is today's xkcd comic relevant?



I don't think Asimov or Turing should be worried at this point.

I might be too young to remember, but this kind of worry accompanies most cutting-edge technology that might be used for destruction, like atomic power and biological research into germs and viruses.
 
We can scrap the LHC then, and atomic power as well; physicists once theorized that setting off a bomb could ignite the whole atmosphere. Progress always comes at a cost. While the stakes are high, I think not pursuing it could cost us more in the end. Living on borrowed time, etc.

It's a totally different thing. There has never been any serious danger coming from the LHC (correction, none whatsoever). On the other hand, the potential damage caused by a rogue unshackled A.I. could be enormous, especially if it was given control of real military assets.

We already have ship-based missiles that can destroy satellites. You can't put technology back in the bag once it's out there.

Yes, we can (TM). Intermediate-range ballistic missiles were banned in a treaty between the US and the USSR, and the deployment of anti-ballistic missile systems was also regulated.

Terrible idea all around. You might consider laws preventing our military from utilizing such technologies, but banning the research itself just guarantees that we won't know how to counter it when the Chinese (or whoever, insert your favorite bogeyman here) inevitably develop it.

Did you even read what I wrote? I am not saying we should ban research. I am saying we should ban its military applications. There is a huge difference there. We do a lot of biotech research, but it is illegal to, say, develop lethal Ebola strains for use as a biological weapon.
 
Couldn't you script a drone, for example, to fly around and open fire on anyone who fits a certain description? Say, kill everyone identified as wearing a red shirt? Or is that not robot enough?
I'd say it is getting closer and closer... hehe
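You could, and the detection half is the trivial part; it's a first-week computer-vision exercise. A detection-only sketch with OpenCV (the color thresholds and blob-size cutoff are arbitrary numbers I picked; it does nothing but print a flag):

```python
# Detection only: flag frames that contain a large red blob.
# Color thresholds and the area cutoff are arbitrary example values.
import cv2

cap = cv2.VideoCapture(0)  # any video source; 0 = default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so mask both ends and combine.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    if cv2.countNonZero(mask) > 5_000:  # "big red blob" threshold
        print("red target in frame")
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Everything that comes after detection (discrimination, authorization, proportionality) is the hard part, which is exactly why handing it to a script is frightening.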
 
Asimov wrote those laws because he saw robots as tools designed by human beings for the use of human beings, and he thought the idea of their not having safeguards unrealistic and preposterous. His "Three Laws" were vital to the functioning of his 'positronic' brains. Our robots are quite primitive by comparison, and they take orders from human beings -- so I'd say concern is warranted, given that the government can and will use its tools against the people it supposedly serves.

Writing laws for robots and basing them on governmental institutions is insane. We wrote guidelines to keep the government in check, and those have been violated. Any law for a robot is doomed to fail as well. Shoot, even the Ten Commandments only lasted -1 day.

None of this worries me though, bring it on.
 