Look, I'm all for humans and robots living together in harmony. But if that can't happen, I'm rooting for the metal men.
Isaac Asimov gave us three very profound and seemingly sacred laws for
autonomous machines to operate by.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
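The laws above form a strict priority ordering, which is what makes them interesting as a design: each law only applies when the higher-ranked laws don't override it. A minimal sketch of that precedence, assuming made-up predicates (`harms_human`, `ordered_by_human`, `endangers_self`) that no real robot actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical description of a candidate action."""
    harms_human: bool = False
    ordered_by_human: bool = False
    endangers_self: bool = False

def permitted(a: Action) -> bool:
    # First Law has absolute precedence: no harm to humans, full stop.
    if a.harms_human:
        return False
    # Second Law: obey human orders (harmful ones were already rejected above).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not a.endangers_self
```

So an order that endangers the robot is still obeyed (Second Law outranks Third), while an order to harm a human is refused (First Law outranks Second). The whole point of Asimov's stories, of course, is that real situations don't reduce to clean booleans like these.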
From what I understand, the current trend in the military's use and
research in the field of robotics ultimately transgresses all three of
the laws above.
My question is: Should we be worried? What do you think?
This is pure fantasy and will never happen.
2) Military use of nanotechnology (including "soft" nanotech).
You fear gray goo? JC Denton? Or both?
1) Cybernetic warfare systems that possess A.I. (are capable of extensive independent operations)
2) Military use of nanotechnology (including "soft" nanotech)
3) Anti-satellite systems that physically shatter satellites (and thus create tens of thousands of pieces of debris that would render low Earth orbit even more dangerous than it already is).
Never going to happen. Nanotech research can easily be camouflaged as chemistry or physics projects. The tools required are not that hard to get, as opposed to standard NBC production equipment. It is too useful as well. Materials that can withstand extreme stress like nothing we've ever seen? That's not going to be shelved.
As for orbital debris, there are quite a few ingenious ways we could clean it up if a serious need ever arose (and it will, I'm sure; there is plenty of junk up there already). Laser brooms and the like.
Regarding AI, primitive neural networks have already been used to plan the invasion of Iraq and handle its logistics; without them, all that planning would have taken months. Though I see where you could be going with this: AIs using molecular manufacturing to create armies out of dirt and fighting the AIs of other nations that have the same, in essence creating an Infinite War that never ends. Not much that you can do about it.
I am also pleased by how AIs seem to unnerve religious people, as if creating sentience were exclusively God's domain.
Not all nanotech research, only its direct use as a weapon of war.
Sci-fi at this point. If such a war occurred tomorrow, we could kiss spaceflight goodbye for decades to come. The economic costs would be incalculable. Kinetic anti-satellite weapons should be banned because they're a sort of "scorched earth" weapon.
A.I. should never be used to fight a war. Any (true) A.I. we create should be programmed to be absolutely incapable of committing violent acts against human beings (or in general).
It's unnerving atheists like me as well, but for completely different reasons. Humans should generally avoid creating things they're not sure can be controlled.
My thoughts exactly. I am not worried about the immediate future (10-20 years), but later, when these systems are given limited intelligence of their own (in order to function even under enemy EM jamming), things could go very bad very fast.
I think the following military developments should be banned (and stay banned):
1) Cybernetic warfare systems that possess A.I. (are capable of extensive independent operations)
2) Military use of nanotechnology (including "soft" nanotech)
3) Anti-satellite systems that physically shatter satellites (and thus create tens of thousands of pieces of debris that would render low Earth orbit even more dangerous than it already is).
We could scrap the LHC then, and atomic power as well, because it was once theorized that setting off a bomb could set the whole atmosphere on fire. Progress always comes at a cost. While the stakes are high, I think not pursuing it could cost us more in the end. Living on borrowed time, etc.
We already have ship-based missiles that can destroy satellites. You can't put technology back in the bag once it's already out there.
Terrible idea all around. You might consider laws preventing our military from utilizing such technologies, but banning the research itself just guarantees that we won't know how to counter it when the Chinese (or whoever, insert your favorite bogeyman here) inevitably develop it.
You may have gotten a better reaction in Science and Technology.
Asimov wrote those laws because he saw robots as tools designed by human beings for the use of human beings, and he thought the idea of their not having safeguards unrealistic and preposterous. His "Three Laws" were vital to the functioning of his 'positronic' brains. Our robots are quite primitive by comparison, and they take orders from human beings -- so I'd say concern is warranted, given that the government can and will use its tools against the people it supposedly serves.