Monday, April 7, 2014

Who Controls the Destiny of Humanity?

Towards the end of the novel, Byerley, while talking with Susan Calvin, concludes that unlike robots, machines don't serve one particular individual but instead serve humanity as a whole, modifying the first law of robotics to “No machine may harm humanity; or through inaction, allow humanity to come to harm” (269). Calvin then cleverly asks Byerley what it is that harms humanity. The two agree that a stable economic foundation is key to a well-functioning society, but I believe that a great deal of the answer to this question also depends on fundamental aspects of human nature. The problem that has repeatedly arisen in every chapter centers on inherent human foibles. As was the case with Robbie, Cutie, Herbie, and the other robots, the human ego either becomes a hindrance to the functionality of the robot, or humans simply refuse to accept its capabilities. Time and time again, however, the robots or machines prove to function remarkably well, exceeding the capabilities of humans in almost every task they are challenged with. Calvin notices this and decides that machines have been so well refined that they account for human error in their calculations, and follow the first law by anticipating how humans will respond to their data. This seemingly elaborate explanation is actually a very sage observation, and it leads me to believe that Asimov is arguing that in order to eradicate imperfections from society, we must first eradicate imperfections from humanity. Changing human nature would be an impossible task, so instead we construct machines that shelter us from our own shortcomings, though this happens without humans knowing it. All of mankind falls into a condition where ignorance is bliss. Machines are at the helm of a ship bound for a destination mankind chose, but that raises the question of whether, despite our inherent flaws, we chose the right destination.

1 comment:

  1. I agree with the overall argument that you make, especially the point about humans assuming the role of asking the right question, i.e. "choosing the right destination," while robots assume the role of churning out the answer that serves the utility of the greater mass. Assuming that robots, at the apex of their evolution, continue to function remarkably well, even if that means maneuvering around the three Laws of Robotics in ways unimaginable to humans, everything that remains depends on the limiting factor that is us human beings and our error-prone nature. Humans, then, are in charge of their own destiny, but sadly not in the grandiose way we like to imagine ourselves to be.