Machines Cannot Have Independent Feelings

When contemplating the human-like intelligence of machines, the computer immediately comes to mind, but how does the ‘mind’ of such a machine compare to the mind of man, and what precisely is meant by the term ‘mind’? A human brain assimilates and processes information in much the same way as a computer. However, because the mind of man possesses consciousness, it perceives beauty, generates moral judgments, and formulates rationalizations, none of which the machine can do. Essentially, that is the difference: the human ‘mind’ has consciousness, the fundamental component of independent ‘feeling’; the machine’s ‘mind’ does not.

When the computer was in its early stages of development, it was thought of as an electronic thinking device, the mechanical equivalent of the human brain. This view is a gross oversimplification of the seemingly limitless scope of the human mind. Machines were thought capable of eventually encompassing “an inductive and creative mind, capable of taking initiative, to which human beings could confide all their problems and obtain instant solutions in return” (Ifrah, 1997: 1679). This early misconception has carried over even to today. This discussion examines the functions of the computer brain, its similarities to and differences from the human brain, the meaning of knowledge, and the limitations of machines as compared to the human mind. It also delves into the complex definition of consciousness so as to illuminate the distinctions between the human mind and the ‘mind’ of the machine. Throughout, it argues that machines cannot possess independent human emotions or feelings in the traditional sense.

Simply stated, computers are machines that carry out algorithmic functions. The machine processes formalized input through a fixed sequence of stages according to a predetermined, exactly specified set of rules, which allows it to complete a procedure in a precise number of steps. Unlike the computer-like functions of the brain, mechanical computers cannot determine right from wrong, cannot make judgments, have no feelings, and cannot think on their own. It cannot be denied that some types of intelligence can be attributed to computers, but this capacity is very limited when balanced against the boundless intricacies of the human brain. The computer is superior, however, in its capability to process information at higher speed, which has given humans a useful tool for a myriad of endeavors. Nevertheless, computers cannot reason, imagine, invent, create, express thoughts, manage ideas, or make judgments, and they lack the ability to adapt to different situations, so they cannot solve problems that are new to them. Unlike the human brain, computers are not conscious of their own being, have no concept of the world around them, and cannot execute voluntary activities (Ifrah, 1997: 1616).
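To make the contrast concrete, the following is a minimal sketch, in Python, of the kind of fixed, rule-following procedure described above; the rules, the input, and the function name are hypothetical illustrations, not drawn from any of the sources cited here.

```python
# A minimal sketch of a purely rule-following procedure: the machine applies a
# fixed, predetermined set of rules to formalized input, step by step, and
# halts after a precise number of stages. Nothing resembling judgment,
# preference, or feeling enters the computation.

RULES = {       # predetermined, exacting rules: symbol -> replacement
    "a": "b",
    "b": "c",
    "c": "a",
}

def run_algorithm(tape: str, steps: int) -> str:
    """Apply the fixed rewrite rules to every symbol, a set number of times."""
    for _ in range(steps):                      # exact, bounded number of stages
        tape = "".join(RULES.get(ch, ch) for ch in tape)
    return tape

if __name__ == "__main__":
    # The outcome is fully determined by the rules and the input alone.
    print(run_algorithm("abcab", steps=3))      # -> "abcab" (the rules cycle with period 3)
```

However elaborate the rule table becomes, the result is fixed by the rules and the input; the procedure never improvises, judges, or wants anything.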

Because machines are only able to follow directives, they do not possess the capability to be self-aware. If it is accepted that computers do not and never will become aware of their own being, then it is reasonable to ask what enables the human’s biological machine to attain consciousness while the silicon-based computerized ‘brain’ cannot. Possibly, the answer lies in the fact that the structure of the human brain is self-organizing: it responds to the individual characteristics and the independent nature of the interactions between itself and its particular environment. Computers do not have the ability to accomplish this. However, other natural, biological systems, such as many types of simple animals and all plant life, also contain multifaceted, self-organizing interrelationships within their inner mechanisms yet are not aware of themselves. This indicates that although self-organization may be an essential precondition for consciousness, it is insufficient by itself.

The solving of a specific problem that requires generalization or searching is usually taken as an indication of artificial intelligence, which is understood to have an ‘all or none’ character. Biological intelligence, on the other hand, admits of degrees. Even the less complex brains of animals can be distinguished from computers in ways that illustrate the vast difference between biological and mechanized brains. The functioning of animals depends largely on customary behaviors, and these behaviors form a progression. It can be reasoned that many types of animals possess at least a degree of human-like intelligence because they are able to survive in their biological environment. “In cognitive tasks of the kind normally associated with human intelligence, animals may perform well. Thus rats might find their way through a maze, or dolphins may solve logical problems or problems involving some kind of generalization” (Kak, 2005).

In general terms, it is assumed that the activity that most clearly differentiates human thought from a machine’s conceptualization is the understanding of language. Although it cannot be denied that those who are deaf or mute do in fact think, they do not use spoken language in the same way as others. In addition, studies have shown that most types of animal life have the capability to learn and solve problems. The use of language is only one part of a larger inventory of behaviors. Computers do not possess the ability of humans, or even of animals, to formulate or initiate any type of language on their own; computer ‘language’ is pre-programmed. The use of uninitiated language, no matter how primitive, belongs to biological beings alone. The machine will never possess independent language, and thus neither independent thought nor feeling.

Machines are lacking in two major areas when compared to the human brain. First, machines, as opposed to brains, are unable to self-organize recursively. Second, machines are founded on conventional logic, whereas, it is argued, human intelligence depends upon quantum mechanics, which allows information about a variety of attributes to be handled at once. “A quantum state is a linear superposition of its component states. Since the amplitudes are complex numbers, a quantum system cannot be effectively simulated by using random numbers. One cannot run a physical process if its probability amplitude is negative or complex” (Kak, 2005).
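To clarify the quoted passage, the standard textbook notation for the simplest quantum system can be written out; this is a generic illustration, not an equation taken from Kak.

\[
|\psi\rangle \;=\; \alpha\,|0\rangle \;+\; \beta\,|1\rangle,
\qquad \alpha,\beta \in \mathbb{C},
\qquad |\alpha|^{2} + |\beta|^{2} = 1 .
\]

Because the amplitudes α and β may be negative or complex, they are not themselves probabilities; only their squared magnitudes are. Sampling with ordinary random numbers can reproduce the probabilities but not the interference between the component states, which is the sense in which a classical simulation falls short.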

Studies in neuroscience substantiate that particular parts of the human brain are devoted to various cognitive tasks. But these parts of the brain do not simply perform signal processing; each functions within the world of its own unique experience and has the ability to generalize on an individual basis. This generalization process retains new experiences and relates them to further cognitive activities within the brain.

When the neurological basis of brain activity is understood, it becomes apparent that the cognitive ability of the human brain cannot be reduced to the algorithmic, mechanical method by which computers operate. Viewed separately, each cognitive process is an operation that integrates into the ‘universal field of consciousness.’ Machines, by contrast, are based on classical computing principles and “have a fixed universe of discourse so they are unable to adapt in a flexible manner to a changing universe” (Kak, 2005). This is why they cannot now, nor will they in the future, match biological intelligence. Quantum theory helps explain why biological processes cannot be described in the same terms as mechanical ones.

Protein folding is an example. Proteins, which are chains of amino acids, quickly fold into a specific structure that ultimately establishes their particular function within an organism. It has been estimated that a high-speed computer applying a realistic set of folding rules would take over ten thousand years to arrive at the correct form even for a short chain of amino acids. Natural biological processes, however, resolve the problem correctly in a matter of seconds, because natural quantum computations are much quicker than mechanical computations. “The anomalous efficiency of other biological optimization processes may provide indirect evidence of underlying quantum processing if no classical explanation is forthcoming” (Fraenkel, 1999).
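The scale of the difficulty can be illustrated with a rough, Levinthal-style count of possible conformations. The short Python sketch below uses illustrative assumed numbers (chain length, states per residue, machine speed); they are not figures taken from Fraenkel (1999).

```python
# Back-of-the-envelope arithmetic (hypothetical numbers) showing why exhaustive,
# rule-by-rule search of protein conformations overwhelms even a fast computer.

residues = 100                 # a short chain of amino acids (assumed)
conformations_per_residue = 3  # assumed rotational states per residue
checks_per_second = 1e12       # assumed machine speed: one conformation per picosecond

total_conformations = conformations_per_residue ** residues
seconds = total_conformations / checks_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"{total_conformations:.2e} conformations to consider")  # ~5.15e47
print(f"{years:.2e} years to enumerate them all")              # ~1.6e28 years
```

Even under these generous assumptions the exhaustive search takes on the order of 10^28 years, while, as noted above, the protein itself folds in seconds.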

The human mind works very quickly; somehow, it appears able to work even faster than studies have shown is possible. Individuals know, or can sense, information they did not previously have access to, a capacity commonly called intuition. A machine cannot accomplish this feat. A computer has the ability to discover previously unknown knowledge, but it cannot crave an answer and cannot conceptualize the existence of unknown knowledge. If individuals could pinpoint where in the brain the craving for unknown knowledge originates, they could possibly translate this to artificial intelligence mechanisms. But, to date, no one is close to locating these origins. Science has not discovered why the human brain yearns for what it does not know. The fact remains that the phenomenon does exist, but not in the computer brain, which knows only what it knows and nothing else. Consciousness exists only in the realm of the living, and it is consciousness that permits the knowledge of what is not known (Rosenblatt, 1982). Thus, the computer brain is limited where the human mind is limitless, a stark and profound contrast.

The concept of consciousness has baffled philosophers and scientists throughout the ages. The answer to the question ‘what is consciousness?’ promises an explanation of what humans are as opposed to machines and even other living creatures on the planet, the key to the mystery that places us on a higher plane of existence. In seeking the answer to this question, several theories have been proposed that either affirm the existence of such a concept or attempt to explain where this elusive seat of the self might hide within the human form. In attempting to determine a solid definition of the term ‘consciousness’, one discovers it is a very difficult idea to pin down in specific words.

“Like most words, ‘consciousness’ does not admit of a definition in terms of genus and differentia or necessary and sufficient conditions” (Searle, 1999). When attempting to determine a specific definition, distinctions are made between what Sigmund Freud termed the ‘conscious’ and the ‘unconscious’, which helps to clear up the issue somewhat. Conscious thoughts are generally recognized as those that deal with identifying the textures and feelings of the various objects around the physical body, the plans one might have for how the day should be spent, or daydreams of what the future might hold. Other thoughts, such as those that control one’s heartbeat and breathing, determine which muscles must be used in order to pick up a pencil, or place the words one is about to speak in the appropriate order, tend to fall more into the realm of the unconscious.

Consciousness allows for the acknowledgment of beauty, which is known only to those that possess biological intelligence. Aesthetic value has very little in common with the processing of information: beauty is known, but knowing it is not a process of mathematical computation. Both the brain and the computer can add numbers, but the computer is not impressed by this knowledge, nor does it feel pride in accomplishing new tasks as the biological mind might. Why the brain knows to perform a function, then knowingly yearns for more knowledge or finds the procedure a fulfilling experience, remains unclear. The computer, by contrast, only performs the function when prompted; it has no contemplations regarding the knowledge of the experience. The human mind can contemplate its own functions and existence. It may also regard its various functions, or those of a computer, as a wondrous, beautiful event, and this, along with the fact that the machine produces only predictable results, remains among the chief factors that separate the two processing entities. However, this viewpoint is a superficial observation of the human mind, because far more remains mysterious about the brain than about the mechanical functioning of the computer. The brain has the ability to reject new knowledge where the computer does not, which allows for an aura of individuality that machines do not enjoy (Clear, 2003).

Knowledge should not be confused with consciousness, because there are too many conscious states in which knowledge plays little or no part. These states include anxiety for no apparent reason or nervousness that has no direct connection with knowledge (Searle, 1999). In addition, although consciousness has been described as something characterized by focused thought, there remain too many exceptions for this to serve as an adequate working definition. “Within one’s field of consciousness, there are certain elements that are at the focus of one’s attention and certain others that are at the periphery of consciousness” (Searle, 1999).

For example, when one is consciously focused on completing a work assignment, one remains aware of background noise, the itch of woolen clothing, or the slightly uncomfortable temperature of the room. Self-consciousness also does not equate with the concept of consciousness, because to be aware of sudden sounds or uncomfortable temperatures does not necessarily indicate that one is self-aware enough to feel shame at a wrong action. For the purposes of this discussion, then, the words of John Searle (1999) will provide our working definition of the term: “By ‘consciousness’ I simply mean those subjective states of sentience or awareness that begin when one awakes in the morning from a dreamless sleep and continue throughout the day until one goes to sleep at night or falls into a coma, or dies, or otherwise becomes, as one would say, ‘unconscious’.” This definition eliminates the problem of the dream state and limits the discussion to humans, as the only beings that can irrefutably demonstrate sentience at a high level.

While consciousness has been scientifically linked with the physical properties of the brain, and therefore emerges as little more than a biological process, that does not mean it lacks features that make it unique among biological processes. Searle pinpoints the concept of subjectivity as a case in point. “There is a sense in which each person’s consciousness is private to that person, a sense in which he is related to his pains, tickles, itches, thoughts, and feelings in a way that is quite unlike the way that others are related to those pains, tickles, itches, thoughts and feelings” (Searle, 1999).

The way these impressions feel to each individual can be described as a conscious state; this state is also referred to as qualia. “Qualia include the ways things look, sound and smell, the way it feels to have a pain, and more generally, what it’s like to have experiential mental states … Qualia are experiential properties of sensations, feelings, perceptions and, more controversially, thoughts and desires as well” (Gregory, 2004: 1). Even in the face of the physical reality of consciousness, scientists continue to point to qualia as evidence of a non-physical aspect of the concept. “The subjective feature of conscious mental processes – as opposed to their physical causes and effects – cannot be captured by the purified form of thought suitable for dealing with the physical world that underlies the appearances” (Nagel, 1986: 13). This subjective state, in which each individual has different impressions of pain, joy, warmth, and caresses, is what has convinced us that there is such a thing as consciousness, yet it also makes it difficult to determine just what the true nature of this phenomenon might be.

As if the definition were not hard enough, the real difficulty enters the equation when one attempts to explain the ‘why’ of consciousness. “The hard problem is explaining how subjective experience arises from neural computation. The problem is hard because no one knows what a solution might look like or even whether it is a genuine scientific problem in the first place” (Pinker, 2007). This is the question dualism attempted to answer by not answering. Dualism held that the mind was some non-corporeal substance that existed in harmony with, yet apart from, the body. However, without a substance with which to work, even the earliest philosophers, such as Descartes himself, could not explain how signals would pass from this unsubstantial mind to the materially existing brain. “[The directives from mind to the brain] are not physical; they are not light waves or sound waves or cosmic rays or streams of subatomic particles. No physical energy or mass is associated with them. How, then, do they get to make a difference to what happens in the brain cells they must affect if the mind is to have any control over the body?” (Dennett, 1991: 35).

Dennett uses the analogy of a ghost to explain the fundamental issue here. A being that can defy the laws of gravity and glide effortlessly through walls, and so render itself undetectable, is by the same token a substance that cannot, by definition, effect any changes upon material things, since material things do not affect it. Faced with this major problem of how the mind could work, the dualists adopted the stance that the mind was made of a substance that could not be found and therefore could not be explained – in effect, ignoring the question.

John Searle devised thought experiments involving a robot, the kind of mechanical creature that personifies the fear, familiar from television and the movies, of future machines acquiring human thought and emotion. In these thought experiments, a computer program equipped with an integrated message system is imagined to allow a robot not only to speak but to interpret the Chinese language. Such robots can also mimic human movements and have, for short periods, deceived people making casual contact into believing they were conscious. The results are impressive but provide no evidence that computers, no matter how sophisticated or enhanced, could ever match the consciousness of humans. Dennett criticizes this type of experiment because it merely encourages an observer to envision a robot fooling someone, little more than a cheap parlor trick, “but doesn’t invite them to think in any considerable depth.” He concluded that “the level of detail required to run this program would mean that, even for a single sentence and response, the man operating (it) would likely need to perform billions of operations with the symbol over hundreds of billions of pieces of memory” (Young, 2009).

Most scientists and philosophers admit that the mind must have some kind of connection with the brain in order to account for the quick reactions and subjective, individual responses observable by others. This line of thinking is referred to as materialism, because it holds that there is some form of physicality to both mind and brain, if they are not one and the same organ, and that this connection can and someday will be discovered. Essentially, these theories revolve around the idea that the mind and the brain are a single entity, somehow communicating both action and thought at the same time.

These theories express “the idea that our thoughts, sensations, joys, and aches consist entirely of physiological activity in the tissues of the brain. Consciousness does not reside in an ethereal soul that uses the brain like a PDA; consciousness is the activity of the brain” (Pinker, 2007). Through neurological study, it has been found that “conscious states are caused by lower-level neurobiological processes in the brain and are themselves higher-level features of the brain” (Searle, 1999). The differences experienced in the colors of the rainbow, the smell of the flowers, the daydreams of an idle hour spent under a palm tree on a beach, are all caused and differentiated by the different neurons that fire in different areas of the brain at different rates of activity. “When we see a ball roll down a hill, we appreciate that the rolling is neither the ball itself, nor something apart in some other world – but merely an aspect of the ball’s extension in space-time; it is a description of the ball, over time, seen from the viewpoint of physical laws” (Minsky, 2002).

The human mind has the ability to know what is morally right or wrong almost instantly, without the need to assimilate much information. It can make decisions based on unknown knowledge, and it can rationalize, justify, and reason, traits known only to that which is conscious. Knowledge has no life; it is based only on cold facts, whereas knowing is uniquely biological in nature. There is much puzzlement regarding knowledge and knowing. “Authentic knowing is much different from knowledge. Authentic knowing cannot really be owned as a possession. It can only be touched and experienced. Real knowing is actually an act of the mind. Unlike knowledge which is a product of the mind” (Kruyff, 2006).

The difference between the human brain and a machine of any type is that humans create machines to be used as tools. Human intellect is too intricate, and consciousness too mysterious, to be duplicated. On the day that a computer can lie or cheat, when it prays to an unknown entity and feels shame or sorrow, then, possibly, it can be compared to the human mind. Until then, the only similarity is that both process information, though to vastly different extents and by vastly different methods. Biological beings, not manufactured ones, have the potential to experience independent feelings. This statement, on its own, may lead to another conversation concerning the necessity of the creation of man, but that is another paper.

Works Cited

Clear, Bruce. “Knowing What We Don’t Know That We Know.” 2003. Web.

Dennett, Daniel. Consciousness Explained. Boston: Little, Brown & Co., 1991.

Fraenkel, A. S. “Protein Folding, Spin Glass and Computational Complexity.” Third Annual DIMACS Workshop on DNA Based Computers. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 48. University of Pennsylvania, 1999, pp. 101-121.

Gregory, R. “Qualia.” The Oxford Companion to the Mind. 2nd ed. Oxford: Oxford University Press, 2004. Web.

Ifrah, G. Historia Universal de las Cifras. Madrid: Espasa Calpe, 1997.

Kak, Subhash. “Artificial and Biological Intelligence.” Ubiquity, vol. 6, no. 42, 2005. Web.

Kruyff, Jan. “Exploring Beyond the Ego Mind: An Essay on Transpersonal Knowing.” The Intuitive-Connections Network Online Magazine, 2006.

Minsky, Marvin. “Minds Are Simply What Brains Do.” Truth Journal, Leadership U, 2002.

Nagel, Thomas. The View from Nowhere. Oxford: Oxford University Press, 1986.

Pinker, Steven. “The Mystery of Consciousness.” Time Magazine, 2007. Web.

Rosenblatt, May. “The Mind in the Machine.” Time Magazine, 1982. Web.

Searle, John R. “The Problem of Consciousness.” University of Southampton, 1999. Web.

Young, Scott H. “Consciousness Explained.” 2009.
