
Never fear, only humans can kill

The Pentagon recently released a statement, ostensibly to make people feel more secure or to show some degree of humanity, that, despite increasingly capable technology, only a human being will decide to kill another human being. This policy assumes that human judgement over decisions to kill is always superior to a machine's.

One simple reason for relying on humans could be the worry that a machine will somehow malfunction and cause innocents to die. While such a malfunction certainly could happen, the risk is likely very low. The United States military is known for extensive resources and thorough research, which decreases the probability of a malfunction. The military also already uses technology with varying degrees of automation that could kill people, yet that automation does not appear to be a public concern.

Humans are not without a history of malfunctioning. People have been condemned to death for crimes they did not commit. Does the fact that a jury of their peers issued the death sentence make a wrongful death any more right? Injuries and deaths due to friendly fire happen. Myths and the Bible provide an extensive litany of human error.

After agreeing that both humans and machines are subject to error, consider the tricky difference between human and machine error. Machines, compared to humans, are much easier to monitor. A machine often provides information about what is and isn't working, and then new parts can be ordered or new code written. Humans provide no such information and may even be unaware that something within themselves isn't working. Mistakes caused by poor judgement, prejudice, or insufficient intelligence or motivation cannot be as easily fixed. Yes, a person can be fired, but it is difficult to determine, with certainty comparable to a machine's, that the new hire won't have the same, or differently debilitating, problems.

Then there is the thinking that, since humans understand what it means to be human, they can take a life with the appropriate moral gravity. If you subscribe to a consequentialist mentality, which values outcomes only, then the judgement falls on whether the death of a human is a good or bad thing, not on who or what performed the act.

If consequentialism does not appeal, the idea that humans kill others with a sense of moral gravity still deserves skepticism. Consider the recent social media campaign from the IDF, which tweeted grisly photos and inflammatory messages; the human propensity for torture, from the rack and the iron maiden all the way to Guantanamo Bay; or gore as entertainment, from public hangings to the Saw movies.

Even poets, who are often considered humanity’s best bet at understanding itself, find themselves removed from killing. Some of the best poetry from WWI talks about the shock of realizing the enemy is, in fact, a person. Firing the gun the next day then requires forgetting such a discovery.

Finally, those in the military are not noted, or hired, for their judgement: those people are judges. Rather, those in the military are expected to follow commands. Humans are certainly capable of following orders, but can they really, compellingly, rival machines?

This is not to say that the military should become automated. Rather, decisions that affect lives deserve serious attention, and blanket assumptions deserve examination.

 
