Could Artificial Intelligence ‘End Mankind’ or Is It All Alarmist Nonsense?

Introduction

Not so long ago, the concept of artificial intelligence (AI) was in the earliest stages of its conception and was pondered only in science fiction. In fact, the alarmist trends in some parts of contemporary global society owe their existence to early sci-fi novels depicting the ostensibly detrimental outcomes of AI taking over humankind and putting an end to everything humane (Perkowitz, 2016; Nowak, Lukowicz and Horodecki, 2018). Although there is a grain of truth in the ethical dilemmas that proponents of this viewpoint raise, the idea of AI as a force that could destroy humankind seems far-fetched, given the lack of the requisite technology and the presence of common sense in global politics.

Background

With the massive and rapidly expanding technological breakthroughs observed at the end of the 20th century and the beginning of the 21st, the concept of AI no longer seems so distant, with a range of innovative technologies incorporating elements of AI (Tegmark, 2017; Bundy, 2017). For instance, tools such as facial recognition devices are only a few examples of how AI is used for the benefit of the safety and well-being of society (Mack, 2015).

However, the tendency to view AI as the herald of the demise of the world remains a sad part of modern reality. It is worth admitting that the arguments provided by supporters of the opposing view are also quite plausible and trustworthy. For example, the opinion of the late Stephen Hawking summarizes the worst fears concerning a possible uprising of AI in a nutshell: “It would take off on its own, and re-design itself at an ever increasing rate” (Cellan-Jones, 2014).

However, Buchanan (2015) also explains that the idea of AI superseding the development of the human race does not imply that it will rebel against people, since the specified action would require consciousness, not intelligence (Tegmark, 2016). In turn, implanting consciousness in AI represents a much more complicated problem than endowing AI with intelligence, mainly because “scientists can’t really agree on a rigorous definition” of it (Buchanan, 2015; Makridakis, 2017).

Since the notion of consciousness wanders into the depths of religion and spirituality, further scientific research is unlikely to yield tangible results, which means that fully autonomous, free-will-wielding machines are still a figment of imagination (Nowak, Lukowicz and Horodecki, 2018). Therefore, the problem of AI as a force that may ultimately destroy humankind does not seem plausible so far, yet the issue may slowly gain weight and importance as opportunities for technological progress and updates to the existing framework for AI emerge.

Argument

The lack of a clear ethical perspective on a range of issues in the contemporary global discussion is another source of concern. Elon Musk, who has been spearheading technological progress and especially the analysis of possibilities for space exploration, has also been quite vocal on the issue of AI development and the threats it contains. According to Musk, the subject matter represents one of the foundational threats to the survival of humankind in general (Kelly, 2017; Makridakis, 2017). Coming from a person who has been at the helm of modern engineering and space science, the described statement sounds especially gloomy (Russell, 2019; Olhede and Wolfe, 2018). However, it is important to keep in mind that the specified statement is merely one position on the issue.

A range of limitations that people currently face in technology are expected to become safeguards against the ostensible rise of the machines in the nearest future. For example, the fact that the concept of intelligence is yet to be defined and does not represent a single quality but, instead, a combination of characteristics will serve as a gatekeeping mechanism (Sanders and Wood, 2019; Green, 2019). According to Kelly (2017), the absence of a clear definition of intelligence and the related abilities prevents the development of technology that would supersede human abilities and thus become the superior force in the target environment. Indeed, the phenomenon of intelligence currently seems far too multifaceted to be defined as a single entity.

Moreover, the assumptions concerning the threat of AI as the herald of the end of humankind seem to be rooted in emotion rather than logic. Specifically, the irrational fear of being ousted from the face of the Earth by developing technology seems to have been pushing the idea of malicious AI forward for years (Kelly, 2017). As Kelly clarifies, “If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief — a myth” (Kelly, 2017).

Therefore, the general alarm regarding the problem of AI development and its impact on humankind as a whole seems to be unwarranted in the current technological and economic setting. Given the multiple issues with financial resources and their investment, the management of international security, the handling of crucial information, and similar concerns, the world does not seem to be ready for this specific type of technological progress to occur.

Therefore, the general statement concerning the probability of innovative technologies slowly becoming a threat to the security and safety of humankind is quite tangible, yet realizing such a threat would require far too great an effort given current technological opportunities (Etzioni, 2018; Oliveira, 2017). By introducing notions such as behavior regulation tools and raising the level of commitment, one will be able to manage the challenges associated with compliance with set standards.

The fact that the promotion of AI development will also entail the creation of autonomous weapons should also be discussed as one of the cornerstone problems of supporting AI developed for military purposes. The introduction of a military agenda into the context of AI, in general, seems to be a very ineffective tool for managing some of the current tensions within the foreign policy environment. Likewise, controlling the decisions made within the environments of other countries does not seem to be a viable idea (Carriço, 2018; Nagpal and Prakash, 2020). Therefore, it is important to remain responsible in the management of military resources and rely on the principles of common sense as the main method of balancing decision-making processes.

In addition, it is important to ensure that, even on the battlefield, AI-based tools are used not for attacking the opponent but for saving civilians and protecting their lives. For instance, innovative technology such as drones and the related devices could be utilized to identify safe and unsafe locations, thus increasing the efficacy of the transportation of civilians within military environments (Walsh, 2015; Müller, 2016). Overall, the focus on the peaceful use of the said technology should become the essential emphasis in order to contain the possible abuse of AI in the military context.

The importance of reconsidering the present framework for managing and producing AI-based tools for military purposes might seem a rather irrelevant goal given the comparatively peaceful environment in which most countries currently find themselves. However, it is worth noting that the current political context can be characterized by unprecedented levels of international tension (Złotowski, Yogeeswaran and Bartneck, 2017).

Therefore, the creation of devices and strategies that could help regulate the production of military AI should be deemed a necessity. With the specified restrictions in place, the threat of AI ever getting out of hand and gaining a power of its own will be eliminated.

Despite the potential that AI currently holds as a technological innovation, the probability of it gaining an agency of its own and becoming a threat to the development and existence of humankind is implausible. For AI to pose such a threat, it would be necessary to grant it the consciousness that humankind possesses, which is impossible given the current rate of technological development (Puaschunder, 2019).

Moreover, the specified step does not seem manageable even in the observable future due to the complications associated with defining intelligence and its components. Nevertheless, it is important to ensure that the current process of AI development remains focused on the exploration of its opportunities in the civil context, and not in the military one. Thus, the threat of AI being abused and thereby leading indirectly to the demise of humankind will be alleviated to a significant extent. For this reason, AI development should be pursued as a concept that holds tremendous potential and offers additional options for future technological breakthroughs and improvement in the quality of people’s lives.

Conclusion

The limitations of current AI research need to be mentioned as the main argument for the impossibility of AI getting out of control and gaining a consciousness of its own. Even setting aside the fact that gaining a semblance of agency and consciousness would require defining the subject matter and replicating the autonomy of the human brain, which has not been fully explored as a concept yet, one will still have to admit that humankind does not have the technological prowess needed to create a fully autonomous AI. Therefore, the threat of AI becoming the reason for the demise of humankind does not currently seem plausible.

Even though science seems to have advanced impressively due to global cooperation and the promotion of knowledge sharing, the existing framework for managing scientific research does not seem to allow studying AI at the rate required to imbue it with sufficient power to put an end to the human race. Therefore, the idea of AI ending humankind and causing a global catastrophe does not represent modern reality accurately.

Reference List

Buchanan, D.W. (2015) ‘No, the robots are not going to rise up and kill you’, The Washington Post. Web.

Bundy, A. (2017) ‘Smart machines are not a threat to humanity.’ Communications of the ACM, 60(2), pp. 40-42.

Carriço, G. (2018) ‘The EU and artificial intelligence: A human-centred perspective.’ European View, 17(1), pp. 29-36.

Cellan-Jones, R. (2014) ‘Stephen Hawking warns artificial intelligence could end mankind’, BBC News. Web.

Etzioni, O. (2018) ‘Point: Should AI technology be regulated? yes, and here’s how.’ Communications of the ACM, 61(12), pp. 30-32.

Gibbs, S. (2014) ‘Elon Musk: Artificial intelligence is our biggest existential threat’, The Guardian. Web.

Green, B. P. (2019) ‘Self-preservation should be humankind’s first ethical priority and therefore rapid space settlement is necessary.’ Futures, 110, pp. 35-37.

Kelly, K. (2017) ‘The Myth of Superhuman AI’, Wired. Web.

Mack, E. (2015) ‘Bill Gates says you should worry about artificial intelligence’, Forbes. Web.

Makridakis, S. (2017) ‘The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms.’ Futures, 90, pp. 46-60.

Müller, V. C. (Ed.). (2016) Risks of artificial intelligence. Boca Raton, FL: CRC Press.

Nagpal, K. and Prakash, S. (2020) ‘Artificial Intelligence and Medical Ethics: Unresolved Issues.’ Austin J Surg, 7(1), p. 1239.

Nowak, A., Lukowicz, P. and Horodecki, P. (2018) ‘Assessing Artificial Intelligence for Humanity: Will AI be Our Biggest Ever Advance? or the Biggest Threat [Opinion].’ IEEE Technology and Society Magazine, 37(4), pp. 26-34.

Olhede, S. and Wolfe, P. (2018) ‘The AI spring of 2018.’ Significance, 15(3), pp. 6-7.

Oliveira, E. (2017) ‘Beneficial AI: the next battlefield.’ Journal of Innovation Management, 5(4), pp. 6-17.

Perkowitz, S. (2016) ‘Removing Humans from the AI Loop — Should We Panic?’, Los Angeles Review of Books. Web.

Puaschunder, J. M. (2019) ‘Artificial Intelligence evolution: On the virtue of killing in the artificial age.’ Scientia Moralitas-International Journal of Multidisciplinary Research, 4(1), pp. 51-72.

Russell, S. (2019) ‘It’s not too soon to be wary of AI: We need to act now to protect humanity from future superintelligent machines.’ IEEE Spectrum, 56(10), pp. 46-51.

Sanders, N. R. and Wood, J. D. (2019) The Humachine: Humankind, Machines, and the Future of Enterprise. New York, NY: Routledge.

Tegmark, M. (2016) ‘Benefits & Risks of Artificial Intelligence’, Future of Life. Web.

Tegmark, M. (2017) Life 3.0: Being human in the age of artificial intelligence. New York, NY: Knopf.

Walsh, T. (2015) ‘Autonomous Weapons: an Open Letter from AI & Robotics Researchers’, Future Life Institute. Web.

Złotowski, J., Yogeeswaran, K. and Bartneck, C. (2017) ‘Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources.’ International Journal of Human-Computer Studies, 100, pp. 48-54.
