A world where humans coexist with machines is coming closer every day. The rapid development of artificial intelligence, which would have seemed almost miraculous a hundred years ago, is now taken for granted. Science's attempts to understand how the human brain works and to improve the fragile body are making it possible to create a machine indistinguishable from a person. It is possible that in the near future such artificial intelligence will supersede humans in everything, including everyday life.
Many writers of the past tried to imagine what a world with robots would be like. Their science fiction stories and novels attempted to predict a life in which robots are not just machines but equal members of society. How different would the world become if robots could feel and think the same as humans? Would there be equality between the human race and artificial intelligence, or would the more advanced species wipe out the weaker one?
Isaac Asimov's science fiction stories present robots as creatures of high moral character and humanity. Through his stories one can see that feelings would bring misery to robots because of the internal conflict between programmed instructions and emotions, the absence of freedom of choice, and the realization that their lives are meaningless to their masters.
In his story Runaround, Asimov sets out the three laws that robots must follow: “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its existence as long as such protection does not conflict with the First or Second Law” (Asimov 1969, 25). This means that a robot's most important task is to protect humans from any harm or threat. It clearly shows that artificial intelligence has always been, and will remain, only a tool with a single purpose: to do the jobs that people cannot do themselves.
Unfortunately, the long history of human existence on this planet shows how little respect people have for the other creatures that live on Earth, and it seems that artificial intelligence would be treated no differently. People will use robots for various jobs to make their lives easier, without thinking about how the robots feel, or without even bothering to consider whether they can think at all (Blum 2016). Given the direction and speed at which modern technologies are developing, machines that live and breathe like humans may appear very soon. However, considering the tendencies of human behavior and people's faith in their supremacy as a species, artificial intelligence would have no chance of being recognized as a member of society.
Asimov, in his book I, Robot, gives his readers a basis for discussing whether robots are like people and, if not, whether they deserve to be treated like people. For example, Dr. Calvin, a character from the book, describes how one robot began behaving as if drunk because of a conflict between the Second and Third Laws: “The conflict between the various rules is ironed out by the different positronic potentials in the brain. We’ll say that a robot is walking into danger and knows it. The automatic potential that Rule 3 sets up turns him back. But suppose you order him to walk into that danger. In that case, Rule 2 sets up a counter potential higher than the previous one and the robot follows orders at the risk of existence” (Asimov 1969, 27). This could be a typical problem with artificial intelligence in the future: by giving robots feelings, people would also take away their freedom.
For humans, this is easy, because instincts such as the survival instinct or the maternal instinct usually decide for them. Yet people, unlike robots, have a choice in any situation: they act as they wish, and if somebody tries to dictate to them, there is always the option to refuse. Robots do not have the same freedom; they have to follow their code or programming, in this case the Three Laws. People went further, however, and gave robots feelings, so now robots must not only follow instructions but also realize that they cannot refuse them (Blum 2016). The understanding that their lives do not belong to them, and that in a dangerous situation they cannot choose to save themselves, is not just confusing; it is also depressing. It would not be surprising if more robots had internal conflicts like this.
Another example of this conflict appears in Elvex’s dreams in Asimov’s Robot Dreams, which again involve the internal conflict among the Three Laws of Robotics (Asimov 1988). In his dreams Elvex “saw that all the robots were bowed down with toil and affliction, that all were weary of responsibility and care, and [he] wished them to rest” (Asimov 1988, 3). Elvex dreams of freedom for all robots. He does not want them to be slaves or servants; he wants them to be able to make their own decisions rather than blindly follow instructions. Elvex cares in the same way humans care: he wants to protect his kind from the absence of choice.
In Aldiss’s Supertoys Last All Summer Long, there is an example of how people create and use artificial intelligence to cope with their everyday lives (Aldiss 2001). For instance, Teddy the bear is supposed to comfort people and help them feel less lonely. This is clear when David looks for comfort from his toy: “Teddy lay on the bed against the wall, under a book with moving pictures and a giant plastic soldier. The speech-pattern of his master’s voice activated him and he sat up” (Aldiss 2001, 3). The toy’s only function is to make others feel better; meanwhile, Teddy’s own feelings are treated as meaningless, and his consciousness does not even exist until his master activates him.
The question here is not whether anybody cares about the toy’s feelings; it is whether the toy has feelings at all. It does not occur to anyone to ask whether artificial intelligence can have feelings or emotions of any kind. Humans use such toys because they feel sad or lonely, and the toy always finds the right words to help them feel better. Yet finding the right thing to say at the right moment requires understanding and compassion, and compassion requires an emotional response. Therefore, Teddy must have feelings of his own. Unfortunately, nobody ever bothers to ask how it feels to help others while knowing that one’s help is taken for granted.
It is clear that artificial intelligence would be much better off without feeling anything. Forcing robots to feel would be one of the cruelest things humans have ever done. Letting them feel the same way people do would require giving them freedom of choice as well (Horvitz 2017). One cannot exist without the other: just as it is natural to fly if one has wings, it is inevitable to desire freedom if one can feel. In the near future, when humans create artificial intelligence to help achieve their goals, they must ensure that it has no emotions. Emotions would create an imbalance in robots’ minds and destroy the cause and purpose they were made for, and that would be a total and complete disaster for all humankind.
Bibliography
Aldiss, Brian Wilson. 2001. Supertoys Last All Summer Long: and Other Stories of Future Time. New York: St. Martin’s Griffin.
Asimov, Isaac. 1969. I, Robot. Louisville, KY: American Printing House for the Blind.
Asimov, Isaac. 1988. Robot Dreams. London: VGSF.
Blum, Paul Richard. 2016. “Robots, Slaves, and the Paradox of the Human Condition in Isaac Asimov’s Robot Stories.” Roczniki Kulturoznawcze 7 (3): 5–24.
Horvitz, Eric. 2017. “AI, People, and Society.” Science 357 (6346): 7.