conflict with the First or Second Law.
Well, then, it seems to me that a robot could not help but think that human
beings ought to behave in such a way as to make it easier for robots to obey
those laws.
In fact, it seems to me that ethical human beings should be as anxious to make
life easier for robots as the robots themselves would be. I took up this matter
in my story “The Bicentennial Man,” which was published in 1976. In it, I had
a human character say in part:
“If a man has the right to give a robot any order that does not involve harm
to a human being, he should have the decency never to give a robot any order
that involves harm to a robot, unless human safety absolutely requires it.
With great power goes great responsibility, and if the robots have Three Laws
to protect men, is it too much to ask that men have a law or two to protect
robots?”
For instance, the First Law is in two parts. The first part, “A robot may not
injure a human being,” is absolute and nothing need be done about that. The
second part, “or, through inaction, allow a human being to come to harm,”
leaves things open a bit. A human being might be about to come to harm because
of some event involving an inanimate object. A heavy weight might be likely to
fall upon him, or he might slip and be about to fall into a lake, or any one of
uncountable other misadventures of the sort might be involved. Here the robot
simply must try to rescue the human being; pull him from under, steady him on
his feet and so on. Or a human being might be threatened by some form of life
other than human — a lion, for instance — and the robot must come to his
defense.
But what if harm to a human being is threatened by the action of another human
being? There a robot must decide what to do. Can he save one human being
without harming the other? Or if there must be harm, what course of action
must he pursue to make it minimal?
It would be a lot easier for the robot if human beings were as concerned
about the welfare of human beings as robots are expected to be. And, indeed,
any reasonable human code of ethics would instruct human beings to care for
each other and to do no harm to each other. Which is, after all, the mandate
that humans gave robots. Therefore the First Law of Humanics from the robots’
standpoint is:
1 — A human being may not injure another human being, or, through inaction,
allow a human being to come to harm.
If this law is carried through, the robot will be left guarding the human
being from misadventures with inanimate objects and with non-human life,
something which poses no ethical dilemmas for it. Of course, the robot must
still guard against harm done a human being unwittingly by another human
being. It must also stand ready to come to the aid of a threatened human
being, if another human being simply cannot get to the scene of
action quickly enough. But then, even a robot may unwittingly harm a human
being, and even a robot may not be fast enough to get to the scene of action
in time or skilled enough to take the necessary action. Nothing is perfect.
That brings us to the Second Law of Robotics, which compels a robot to obey
all orders given it by human beings except where such orders would conflict
with the First Law. This means that human beings can give robots any order
without limitation as long as it does not involve harm to a human being.
But then a human being might order a robot to do something impossible, or give
it an order that might involve it in a dilemma that would do damage to
its brain. Thus, in my short story “Liar!,” published in 1941, I had a human
being deliberately put a robot into a dilemma where its brain burnt out and
ceased to function.
We might even imagine that as a robot becomes more intelligent and self-aware,
its brain might become sensitive enough to undergo harm if it were forced to