Like any rule, the rules of robotics are flawed. The first law reads: no robot may harm a human being or, through inaction, allow a human being to come to harm. However, great danger ensues if the law comes to read only: no robot may harm a human being. This truncated rule leaves room for a robot to allow humans to come to harm through its own inaction. The combination of flawed rules and robots leads to chaos because robots behave like children in the early stages of development. Like children, robots gain new knowledge with
each experience. Speaking of robots, Asimov writes, “They tell you when they
think you’re wrong, though. They don’t know anything about the subject but what
we taught them, but that doesn’t stop them” (p. 147). After reading this, I
immediately thought of my little cousins, who think they know everything about everything despite their young age. If I told my cousins to “go lose themselves,” they would probably respond much like the robot who goes into hiding: they would not know any better, but they would know how to decipher the language, which is what the robots have learned to do through their interactions with humans. Through those same interactions, the robots have also learned how to lie, which points to the danger of knowledge, a recurring theme in science fiction
novels like “Frankenstein.” It is the robots’ ability to learn and grow from
their experiences that poses a threat to the human race.