Torture vs Drones

You thought wrong.

But feel free to try and post a scene with torture or what drones do to people.

Really, it fits with the sidebar discussion that has been going on regarding choosing who to save, but be a nitpicking Nancy iffinjuwanna.
 
What part are you having so much difficulty with?

The notion that most civilians would naturally try to save the life of a best friend, or of women and children, if they were in imminent danger, even if it might jeopardize the mission of their unit? Why the military can't have people deserting their assignments in combat whenever they please to do so? The fact that the military teaches ordinary civilians to be soldiers who overcome these innate responses through rigorous training and exercises, intentionally instilling a sense of loyalty to their unit above everything else?

You do realize this was at least part of the reason the US military was segregated for so long? That they thought it would be impossible to teach racists to overcome their prejudices and act like that in combat if the units were racially mixed? Why this is such a critical element of a cohesive fighting unit? That if you don't really care what happens to the soldier next to you, it could have a massive negative effect?

If you think this, you should spend some time reading about the soldiers who have been awarded the Medal of Honor. I think those examples of men going to great lengths to save others pretty much prove you're totally full of crap on your premise.
 
Really, it fits with the sidebar discussion that has been going on regarding choosing who to save, but be a nitpicking Nancy iffinjuwanna.

I suppose it does fit you well to call someone who refers back to the topic a nitpicking Nancy ...iffinjuwanna.

Now, if you have an actual argument I'd be happy to respond.
 
Grrr! I cannot find the clip of the drowning scene from the movie "I, Robot" with Will Smith. I thought it would fit here.
Heh. Yes, it would.

I think Detective Spooner was right. The robot should have tried to save the 11-year-old girl instead of him, even though her calculated odds of survival were much lower than his own. That would be the human response instead of the coldly calculated one.

Of course, the basic premise of I, Robot also fits the topic of this thread to a great extent. If the US government (with its current sense of morality and ethics, or lack thereof) someday has the ability to make such complex machines imbued with artificial intelligence, the work will likely all be classified to make killing machines, so it can continue to warmonger while drastically reducing the war weariness caused by American casualties. And the robots certainly won't have Isaac Asimov's First Law of Robotics as their basic tenet:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

After all, isn't that the primary purpose of DARPA-funded research?
 
Oh, I agree. We definitely need more sci-fi scenes posted in this discussion...

Science fiction, when it's any good, often revolves around the question of what it means to be human.

The questions of "who do we save, who do we allow to die, and who do we kill?" (torture and drones!) all revolve around a very basic question - whose lives do we value more than others, and why do we do so?

I find it unsurprising that throughout science fiction there are authors and filmmakers who have attempted to address this same basic point, if perhaps from another angle that may be helpful to some in sorting out their thoughts and less helpful to others. They have better writers and production budgets than I have access to, and sometimes they make a point or ask a question more effectively than I am capable of doing.

If the reference I was trying to make is of no use to you because it has a hypothetical human-like robot making the decision regarding the relative value of two different lives in a contrived but believable and extrapolatable fictional car crash, then I guess I'm sorry?
 
Going off on a tangent very briefly:

One of the things I dislike about Christmas is the presents. You get to determine precisely how much you value the people around you - in monetary terms!

At least that's how it used to be. Nowadays, nearly everyone I know doesn't buy each other presents at all. The rich ones get themselves what they want whenever they want it, and the poor ones go without anyway.

Mind you, there's nothing wrong with buying presents for children - just to see that look of "Is that it?" on their cheery little faces.

Bah! Humbug!
 
Oh, I agree. We definitely need more sci-fi scenes posted in this discussion...

Huh, sarcasm coupled with a non-answer ...

Well, I guess my original impression is confirmed. You're not bringing anything to the conversation, and you don't even know what Farm Boy's talking about.
 
Huh, sarcasm coupled with a non-answer ...

Well, I guess my original impression is confirmed. You're not bringing anything to the conversation, and you don't even know what Farm Boy's talking about.

As opposed to your own constructive comments?

As for sci-fi bringing anything useful to a discussion of torture and drones: what lesson about the value of human life isn't already plain from the actual drone attacks on, and torture of, human beings?
 
I'm chastising you for being non-constructive and for being rude about something you have no idea about. That, in itself, is constructive. You can tell, based on my feedback, that your audience isn't buying your position.

Your opinionated bluff has been called.
 
 
Heh. Yes, it would.

I think Detective Spooner was right. The robot should have tried to save the 11-year-old girl instead of him, even though her calculated odds of survival were much lower than his own. That would be the human response instead of the coldly calculated one.

Of course, the basic premise of I, Robot also fits the topic of this thread to a great extent. If the US government (with its current sense of morality and ethics, or lack thereof) someday has the ability to make such complex machines imbued with artificial intelligence, the work will likely all be classified to make killing machines, so it can continue to warmonger while drastically reducing the war weariness caused by American casualties. And the robots certainly won't have Isaac Asimov's First Law of Robotics as their basic tenet:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
After all, isn't that the primary purpose of DARPA-funded research?

And I would have said the robot made the only rational decision it could have, if not the human one, and it is the choice I like less. Then again, that path leads to the somewhat tortured conclusion that is the reasoning of the movie's overall antagonist. Maybe a degree of conflict with no clear answer is actually the ground we should be aiming for.
 
You may not value them more...

But, if you had to choose between a cheesehead from Zandfort am See and a Mongolian businessman (do they have those?) to save, who would you save?

I think a lot of us would opt for people we (seemingly) have more in common with...
The Mongolian businessman is more prone to extinction.

I have no idea. Knowing the kind of cowardly bastard I am, I'd probably flail my arms and shout: "Somebody should do something!"
 
And I would have said the robot made the only rational decision it could have, if not the human one, and it is the choice I like less. Then again, that path leads to the somewhat tortured conclusion that is the reasoning of the movie's overall antagonist. Maybe a degree of conflict with no clear answer is actually the ground we should be aiming for.

You are forgetting that robots are programmed. The robot in I, Robot is very unlike a drone, which is simply steered by a human. At most a robot can achieve a random decision. If a robot makes a rational decision, it was programmed to do so. If there is any relevance to drone attacks and torture, it is that the latter are conscious actions, whether they are rational or not. One may fantasize about a robot making a conscious decision, but it will never happen. In that sense the term artificial intelligence is flawed.
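
To put that in concrete terms, here is a minimal sketch of the point, assuming the rescue rule is simply "maximize the chance of a successful save". The 45% and 11% survival figures are the ones quoted in the film; the names, the data layout, and the choose_rescue function are all invented for illustration, not any real system.

# A minimal sketch: the robot's "rational" choice is nothing more than
# a rule a human wrote down beforehand. The 45%/11% figures are quoted
# in the film; everything else here is invented for illustration.

def choose_rescue(victims):
    """Return the victim with the highest estimated survival probability."""
    return max(victims, key=lambda v: v["survival_probability"])

victims = [
    {"name": "Del Spooner", "survival_probability": 0.45},
    {"name": "Sarah", "survival_probability": 0.11},
]

# The robot "decides" to save Spooner only because a programmer chose
# maximizing survival probability as the rule.
print(choose_rescue(victims)["name"])  # -> Del Spooner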

AI is a totally different subject, worthy of its own thread.

However, if we're done discussing drones and torture here, I'm done here as well. And I haven't seen any new angles lately that don't focus on personal comments (comments on person, not issue).
 
You are forgetting that robots are programmed. The robot in I, Robot is very unlike a drone, which is simply steered by a human. At most a robot can achieve a random decision. If a robot makes a rational decision, it was programmed to do so. If there is any relevance to drone attacks and torture, it is that the latter are conscious actions, whether they are rational or not. One may fantasize about a robot making a conscious decision, but it will never happen. In that sense the term artificial intelligence is flawed.

AI is a totally different subject, worthy of its own thread.

However, if we're done discussing drones and torture here, I'm done here as well. And I haven't seen any new angles lately that don't focus on personal comments (comments on person, not issue).

The point I am trying to make, albeit badly, is not about the AI potential of the robot or drone. It has to do with how we weigh the relative value of human lives on this planet.

Anytime we decide to kill or torture somebody, we have made a conscious decision regarding the value, or lack thereof, that their life possesses. Drones are a particularly relevant concern here. Usually, historically, and generally, if you wanted to kill somebody with your military, you had to really want to do it. The person being killed had to be so dangerous that the military was willing to risk whichever of its own members were going to be sent on a likely dangerous trip to conduct the killing. An execution was not without risk; you had to really mean it. Drones are a technological innovation that skews this scale. If you can just send a glorified remote-controlled airplane to do your dirty work, you can kill just about whoever you want with nearly zero risk to the lives of the people you value more. The decision to kill is now cheaper. You don't have to "mean it" nearly as much before deciding to go ahead and remove somebody from the planet. That makes life in general less valuable, and it's a reasonable point of discussion.

As I, Robot pertains to this, what we build is a pretty good indicator of how we think. If we were to build something like the robot from that movie, we would have built something that views human life in an almost entirely egalitarian fashion. Laudable, no? Saving the older man, because the attempt is more likely to succeed, rather than the young girl is an entirely egalitarian concept. He's somebody's father/brother/lover/son too. That decision is supposed to make us feel a little bad, as Formy correctly points out, because, egalitarian or not, it seems like the less human (and probably wrong) choice to make. It should have tried to save the little girl, because we value her more. That's a somewhat dicey path to tread, though, if not immediately obvious, when saying "of course innocent little girls are worth more than jaded middle-aged men." It's the same path that eventually leads, should we choose to walk upon it, to us making decisions like "Killing this dangerous man in Yemen isn't worth risking the life of a US Marine. He's worth so little that he can be exterminated like an insect."
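
To make that concrete, a short sketch under the same invented setup as above: the "human" preference is just as programmable as the egalitarian one; swap the scoring rule and the same machinery saves the girl instead. The child_weight value is an arbitrary number picked purely for illustration.

# Illustrative only: either value system is just a scoring rule a
# human chose. Names, ages, and weights are invented for the example.

def egalitarian_score(victim):
    # Every life weighted equally: maximize the odds of a rescue.
    return victim["survival_probability"]

def human_score(victim, child_weight=5.0):
    # Value children more: scale a child's survival odds upward.
    weight = child_weight if victim["age"] < 18 else 1.0
    return weight * victim["survival_probability"]

victims = [
    {"name": "Del Spooner", "age": 36, "survival_probability": 0.45},
    {"name": "Sarah", "age": 11, "survival_probability": 0.11},
]

print(max(victims, key=egalitarian_score)["name"])  # -> Del Spooner
print(max(victims, key=human_score)["name"])        # -> Sarah (0.55 vs 0.45)

Either way, the value judgment was written down by a person before the machine ever moved, which is the same point the drone argument above makes.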
 
The point I am trying to make, albeit badly, is not about the AI potential of the robot or drone. It has to do with how we weigh the relative value of human lives on this planet.

Anytime we decide to kill or torture somebody, we have made a conscious decision regarding the value, or lack thereof, that their life possesses. Drones are a particularly relevant concern here. Usually, historically, and generally, if you wanted to kill somebody with your military, you had to really want to do it. The person being killed had to be so dangerous that the military was willing to risk whichever of its own members were going to be sent on a likely dangerous trip to conduct the killing. An execution was not without risk; you had to really mean it. Drones are a technological innovation that skews this scale. If you can just send a glorified remote-controlled airplane to do your dirty work, you can kill just about whoever you want with nearly zero risk to the lives of the people you value more. The decision to kill is now cheaper. You don't have to "mean it" nearly as much before deciding to go ahead and remove somebody from the planet. That makes life in general less valuable, and it's a reasonable point of discussion.

As I, Robot pertains to this, what we build is a pretty good indicator of how we think. If we were to build something like the robot from that movie, we would have built something that views human life in an almost entirely egalitarian fashion. Laudable, no? Saving the older man, because the attempt is more likely to succeed, rather than the young girl is an entirely egalitarian concept. He's somebody's father/brother/lover/son too. That decision is supposed to make us feel a little bad, as Formy correctly points out, because, egalitarian or not, it seems like the less human (and probably wrong) choice to make. It should have tried to save the little girl, because we value her more. That's a somewhat dicey path to tread, though, if not immediately obvious, when saying "of course innocent little girls are worth more than jaded middle-aged men." It's the same path that eventually leads, should we choose to walk upon it, to us making decisions like "Killing this dangerous man in Yemen isn't worth risking the life of a US Marine. He's worth so little that he can be exterminated like an insect."

It can be far easier. Save your owner, before all others.
 
You are forgetting that robots are programmed. The robot in I, Robot is very unlike a drone, which is simply steered by a human. At most a robot can achieve a random decision. If a robot makes a rational decision, it was programmed to do so. If there is any relevance to drone attacks and torture, it is that the latter are conscious actions, whether they are rational or not. One may fantasize about a robot making a conscious decision, but it will never happen. In that sense the term artificial intelligence is flawed.

AI is a totally different subject, worthy of its own thread.

However, if we're done discussing drones and torture here, I'm done here as well. And I haven't seen any new angles lately that don't focus on personal comments (comments on person, not issue).

It would be programmed by a human, and thus, when it came to making a choice, it would be programmed to make the choice a human would, since a human wrote the program in the first place.
 