means that some do at some times. Others may do it at other times, or may not do it at all.
If we have to wait for actual laws prescribing human behavior in order to establish psychohistory
(and surely we must) then I suppose we will have to wait a long time.
Well, then, what are we going to do about the Laws of Humanics? I suppose what we can do is to
start in a very small way, and then later slowly build it up, if we can.
Thus, in Robots and Empire, it is a robot, Giskard, who raises the question of the Laws of
Humanics. Being a robot, he must view everything from the standpoint of the Three Laws of
Robotics — these robotic laws being truly prescriptive, since robots are forced to obey them and
cannot disobey them.
The Three Laws of Robotics are:
1 — A robot may not injure a human being, or, through inaction, allow a human being to come to
harm.
2 — A robot must obey the orders given it by human beings except where such orders would
conflict with the First Law.
3 — A robot must protect its own existence as long as such protection does not conflict with the
First or Second Law.
Well, then, it seems to me that a robot could not help but think that human beings ought to
behave in such a way as to make it easier for robots to obey those laws.
In fact, it seems to me that ethical human beings should be as anxious to make life easier for
robots as the robots themselves would. I took up this matter in my story “The Bicentennial Man,”
which was published in 1976. In it, I had a human character say in part:
“If a man has the right to give a robot any order that does not involve harm to a human being, he
should have the decency never to give a robot any order that involves harm to a robot, unless
human safety absolutely requires it. With great power goes great responsibility, and if the robots
have Three Laws to protect men, is it too much to ask that men have a law or two to protect
robots?”
For instance, the First Law is in two parts. The first part, “A robot may not injure a human
being,” is absolute and nothing need be done about that. The second part, “or, through inaction,
allow a human being to come to harm,” leaves things open a bit. A human being might be about
to come to harm because of some event involving an inanimate object. A heavy weight might be
likely to fall upon him, or he may slip and be about to fall into a lake, or any one of uncountable
other misadventures of the sort may be involved. Here the robot simply must try to rescue the
human being; pull him from under, steady him on his feet and so on. Or a human being might be
threatened by some form of life other than human — a lion, for instance — and the robot must
come to his defense.
But what if harm to a human being is threatened by the action of another human being? There a
robot must decide what to do. Can he save one human being without harming the other? Or if
there must be harm, what course of action must he pursue to make it minimal?
It would be a lot easier for the robot if human beings were as concerned about the welfare of human beings as robots are expected to be. And, indeed, any reasonable human code of ethics