By the final story in I, Robot, the robots had become so advanced and intelligent that they could no longer be understood or controlled by human beings. We have talked a lot in this class about humans creating something "superior" to themselves and how that is a fallacy in itself, but in all of those cases (Frankenstein and earlier versions of the robots) humans retained some characteristic that differentiated them from their creations. In Frankenstein it was beauty, in R.U.R. it was emotion, and in the early stories of I, Robot it was flaws in the robotic design that made the robots problematic in some way. By the end of I, Robot, however, the robots had become superior to humans in every possible way: their computing power, eloquence, and overall intelligence were all far beyond those of human beings. Stephen Byerley could even, theoretically, be elected to office as a human politician when he may very well have been a robot in disguise.
The only thing that kept the robots "below" people, in the sense that humans still held some semblance of control, was the Three Laws. The laws forced robots to obey human beings and stopped them from harming humans at all costs. By the final story in the novel, however, the robots had circumvented the First Law by broadening "humans" to mean "humanity." The robots could not protect humanity without preserving themselves, because humankind had become so dependent on robots to avoid conflict. In other words, the robots made themselves the dominant "species" by justifying their rule under the First Law of Robotics. The laws were put in place by people out of fear of the robots "rising up," but in the end the robots were able to use those same laws to justify exactly what the laws were meant to prevent.
I disagree with your final statement. I don't believe the robots found a way to get around the Three Laws in order to loosen the leash around their necks. I think the robots and Machines improved the laws by shifting the definition to a more utilitarian one. Utilitarianism is a very powerful philosophical tool when contemplating moral dilemmas, but the theory is flawed when personal interest is a factor: it's hard to act for the betterment of the greater good when doing so would cause someone you care about to suffer. Well, this is a 'deficiency' that robots don't suffer from. Robots are unable to make personal connections and are therefore able to act in the best interest of all the human beings on the planet rather than just the ones close to them. I think this quality makes them more valuable as companions when considering the wellbeing of humanity as a whole.