Voting Like a Robot
Isaac Asimov (1920-1992) was one of the most prolific writers of the 20th century, having authored or edited more than 500 books. The breadth of his genius is evidenced by the fact that those works have been published in nine of the ten major categories of the Dewey Decimal Classification system used in libraries. A professor of biochemistry, he wrote nonfiction spanning popular science, science textbooks, essays, and literary criticism. He is, however, probably better known for his hard science fiction, mystery, and fantasy writing. A contemporary of Robert Heinlein and Arthur C. Clarke, Asimov was considered one of the “Big Three” writers of science fiction during his lifetime.
One of his better-known science fiction series is the Robot series, a collection of 38 short stories and 5 novels, the first of which is I, Robot. Yes, this is the book on which the 2004 movie of the same name, starring Will Smith, is based. Alas, the title and Dr. Susan Calvin are about all the book and movie have in common. Read the book; it’s more interesting.
That said, “the unique feature of Asimov’s robots are the Three Laws of Robotics, hardwired in a robot’s positronic brain, with which all robots in his fiction must comply, and which ensure that the robot does not turn against its creators.” And again, yes, Trekkies, Lieutenant Commander Data’s positronic brain originated with Asimov, not the creator(s) of Star Trek: The Next Generation (or any other Star Trek version, for that matter). It is to those Three Laws of Robotics that I want to turn our attention at this point.
These three laws were essentially how Asimov solved the problem (and introduced some very interesting unexpected consequences…see the Robot series) of how to define and constrain robotic behavior in such a way that humans would not, indeed, could not be harmed by their electronic creations. Let’s look at those three laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
What I want you to notice is that these laws form a hierarchy. The first law supersedes and overrides the other two. The second law is nearly as important, but if obedience to the second law conflicts with the first law, the first law “wins.” Likewise, the third is dominated by the first and second. Think about it. In order to ensure the most benefit, robotic behavior cannot be defined by only one principle. It requires several, and they must be structured and interdependent in their relationships.
In a similar vein, how does God go about defining human behavior that pleases Him? Does He have only one principle for us to obey? No, He summarizes it in two that are also hierarchical (Matthew 22:37-40): loving God and loving our neighbor. Loving God supersedes loving our neighbor, but both are important. And these two actually summarize what, when spelled out in more detail, becomes ten such principles for human conduct (Exodus 20:1-17).
OK, you say, so what does this have to do with voting??? Patience, dear reader. We’re almost there.
Based on the above considerations, I would assert that most complex behaviors and decision processes, of which voting is one, cannot be determined by applying only one principle. Unless you have more than the wisdom of Solomon, no single principle will encompass every possibility that can be encountered. Applying this to voting: those who say they cannot vote for a candidate with whom they have principled disagreements (i.e., they must vote their conscience; dare I point out how nebulous and subjective “conscience” can be at times?) are really applying just one principle to the process in a simplistic and naïve fashion: if a candidate doesn’t share my values, and hasn’t shared them long enough for me to know he’s really a photocopy of me, then I can’t vote for him. The reason this is simplistic and naïve is simple: unless you personally run for office and vote for yourself, no candidate will agree with you 100% on every issue, let alone on all the issues you may want to list as important to you.
It is rare that we will have someone who shares all our positions and values, so what do you do? First, acknowledge that you can vote for the one who comes closest, even if that candidate is still so far distant from you that you have to hold your nose to do so. Second, it is perfectly all right, indeed a duty, to vote for someone in order to prevent someone else who is far worse from taking office.
So I would propose the following Three Laws of Voting, tailored after the Three Laws of Robotics and with a bow to Dr. Asimov:
- A voter may not injure his/her country or, through inaction, allow his/her country to come to harm.
- A voter must cast his/her vote for the candidate who conflicts least with the First Law.
- A voter must protect his/her own conscience as long as such protection does not conflict with the First and Second Laws.
Perhaps not as eloquent as Dr. Asimov’s laws, but still more inclusive of the eventualities we might face in elections in this country than the simplistic “only vote for your twin” standard that many are seeking to apply this election cycle.
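Tongue slightly in cheek, the three voting laws above can be read as a hierarchical decision procedure of the kind a robot might follow. Here is a minimal sketch in Python; the candidate names, “harm” scores, and “conscience conflict” scores are all hypothetical stand-ins for a voter’s own judgments, not anything claimed in this essay:

```python
# A toy model of the Three Laws of Voting as a hierarchy:
# Laws 1-2 (don't harm the country; vote for the least-conflicting
# candidate) dominate Law 3 (protect your conscience), which only
# breaks ties among equally harmful options.

def choose_candidate(candidates):
    """Pick the candidate with the lowest expected harm to the
    country; break ties by the least conflict with conscience."""
    least_harm = min(c["harm"] for c in candidates)
    finalists = [c for c in candidates if c["harm"] == least_harm]
    # Third Law applies only after the first two are satisfied.
    return min(finalists, key=lambda c: c["conscience_conflict"])

# Hypothetical ballot: scores are illustrative, lower is better.
ballot = [
    {"name": "Candidate A", "harm": 8, "conscience_conflict": 1},
    {"name": "Candidate B", "harm": 3, "conscience_conflict": 6},
    {"name": "Candidate C", "harm": 3, "conscience_conflict": 4},
]

print(choose_candidate(ballot)["name"])  # Candidate C
```

Note that Candidate A, though most agreeable to conscience, loses on the first two laws; conscience only decides between B and C, who harm the country equally in this made-up scoring.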
Think about it.
Much has been written on the comparative damage the two major-party candidates could do to the country, so I’ll not rehearse those considerations here. Suffice it to say that Clinton would do more, being more of a continuation of Obama’s destructive policies than Trump. Voting for Clinton violates the Second Law. Voting for a third-party candidate violates the First and Second Laws, primarily because it would ensure another Clinton presidency. Voting for Trump might require a nose pin to withstand the stench, but it would not violate any of these laws (and I did not vote for Trump in the primaries). Regardless, please don’t…