off. I did not have to try writing such a story to know that it would be a
particularly difficult one to write and that I couldn’t do it. At least, not
then.
In 1976, however, I finally tackled the job and wrote “The Bicentennial Man.”
It dealt essentially with a robot that became more and more human, without
ever being accepted as a human being. He became physically like a human being,
mentally like a human being, and yet he never crossed the line. Finally, he
did, by crossing the last barrier. He made himself mortal, and as he was
dying, he was finally accepted as a human being.
It made a good story (winning both the Hugo and the Nebula), but it didn’t
offer a practical way of distinguishing between robot and human being, because
a robot couldn’t wait for years to see if a possible human being died and thus
proved itself to be a human being.
Suppose you are a robot and you have to decide whether something that looks
like a human being is really a human being, and you have to do it reasonably
quickly.
If the only robots that exist are primitive, there is no problem. If an object
looks like a human being but is made of metal, it is a robot. If it talks in a
mechanical kind of voice, moves with awkward, jerky motions, and so on and so
on, it is a robot.
But what if the robot looks, superficially, exactly like a human being (like
my robot, Daneel Olivaw)? How can you tell that he’s a robot? Well, in my
later robot novels, you can’t, really. Daneel Olivaw is a human being in all
respects except that he’s a lot more intelligent than most human beings, a lot
more ethical, a lot kinder and more decent, a lot more human. That makes for a
good story, too, but it doesn’t help identify a robot in any practical sense.
You can’t follow a robot around to see if it is better than a human being, for
you then have to ask yourself—is he (she) a robot or just an unusually good
human being?
There’s this—
A robot is bound by the Three Laws of Robotics, and a human being is not. That
means, for instance, that if you are a human being and you punch someone you
think may be a robot and he punches you back, then he is not a robot. If you
yourself are a robot, then if you punch him and he punches you back, he may
nevertheless be a robot, since he may know that you are a robot, and the
First Law does not prevent him from hitting you. (That was a key point in my early
story, “Evidence.”) In that case, though, you must ask a human being to punch
the suspected robot, and if he punches back he is no robot.
However, it doesn’t work the other way around. If you are a human being and
you hit a suspected robot, and he doesn’t hit you back, that doesn’t mean he
is a robot. He may be a human being, but a coward. He may be a human being but
an idealist, who believes in turning the other cheek.
In fact, if you are a human being and you punch a suspected robot and he
punches you back, he may still be a robot.
After all, the First Law says, “A robot may not harm a human being or, through
inaction, allow a human being to come to harm.” That, however, begs the
question, for it assumes that a robot knows what a human being is in the first
place.
Suppose a robot is manufactured to be no better than a human being. Human
beings often suppose other people are inferior, and not fully human, if they
simply don’t speak their language, or speak it with an odd accent. (That’s the
whole point of George Bernard Shaw’s Pygmalion.) In that case, it should be
simple to build a robot for whom the definition of a human being includes
the speaking of a specific language with a specific accent. Any failure in
that respect makes the person the robot is dealing with not a human being, and the
robot can harm or even kill him without breaking the First Law.
In fact, I have a robot in my book Robots and Empire for which a human being