Are Autonomous Weapons an Ethical Technology?

Introduction

At the end of the twentieth century, it was already understood that the drive for technological advancement would inevitably lead to striking societal transformations. Among the many applications of ethics to the improvement of social outcomes, the assessment of emerging technologies is arguably the most important. Researchers, scientists, and engineers should therefore not lose sight of applied ethics when thinking about the social context of innovation.

To this end, technology has to be treated as a “system of socio-technical systems” (van den Hoven, 2014, p. 6). Seen this way, it is easier to understand why technology ethics so often enters public discourse. The aim of this paper is to analyze offensive autonomous weapons using Palm and Hansson’s criteria for ethical technology assessment. It will be argued that the development of military robots with lethal capabilities raises a wide range of ethical issues, the most important of which concern control, influence, and power.

Offensive Autonomous Weapons

Numerous governments around the world are currently sponsoring artificial intelligence (AI) and robotics research institutions that work on the development of offensive autonomous weapons (Roff, 2014). Fortunately, the global scientific community recognizes that the development and future deployment of lethal robots raise a wide range of ethical challenges that make the existence of the technology morally impermissible. A fictional video depicting ‘slaughterbots’ was recently released by the Campaign to Stop Killer Robots in an attempt to inform the public about the dangers of military robotics and elicit support for a ban on autonomous weapons (“Disturbing video,” 2017).

Ethics

The growing concerns about the development of automated weapon systems are grounded in the grim reality of military drones, which have been deployed innumerable times against legitimate and illegitimate military targets. Given the rapid rate of scientific advancement in robotics, it is not a stretch to argue that the development of offensive autonomous weapons will radically transform the balance of power in the near future.

Palm and Hansson (2006) argue that new technologies with ethical implications have to be evaluated, already at the development stage, against several criteria. The emerging technology at issue here is associated with the second item on their list: control, influence, and power. Upon careful consideration of the nature of this scientific advancement, it becomes clear that its future adoption will shift global patterns of control, influence, and power, which can have unpredictable consequences for the global community.

The push for increased autonomy in warfare carries a host of practical consequences that are acceptable in neither ethical nor legal terms. The deployment of lethal autonomous robots cannot be squared with modern international conventions pertaining to just war. According to Roff (2014), these conventions, collectively referred to as the Laws of War (LOW), hinge on a set of ethical standards that intelligent machines will inevitably transgress. The first issue to consider is attack decisions, which, if made by robots, might affect combatants and non-combatants indiscriminately.

Another ethical consideration, which makes the criterion of control, influence, and power the most prominent, is the lowering of barriers to war. If autonomous robotic systems are equipped with lethal capabilities, the low cost of their deployment on a battlefield will make it easier for nations to engage in military action (Hellstrom, 2013). A low risk of casualties will make aggressive foreign policies expedient for some states, thereby provoking the leaders of other nations.

The issue will reach its extreme if the risk is borne by only one side. In the case of unilateral deployment of new military tactics backed by automated systems with lethal capabilities, the costs of war can be truly horrific (Altmann, Asaro, Sharkey, & Sparrow, 2013). Taken to the extreme, the total erosion of barriers to war could result in the global hegemony of a single nation. Not only is such a scenario associated with the disruption of current patterns of control, influence, and power, but it is also morally unpalatable.

In the age of automated battlefield weapons, wide proliferation might provide rogue states and non-state actors with unprecedented power. There is little doubt that once the technology is adopted, numerous nations will work hard to replicate it, making the world more dangerous. The severity of the problem is underscored by a study indicating that more than fifty nations possessed military robots in 2012 (as cited in Sandler, 2014).

Furthermore, the scale of the threat will be magnified beyond reasonable limits if outer space is opened to militarization (Sandler, 2014). Given the rapid rate of scientific advancement in space exploration, this scenario ceases to be improbable. The ethical concerns associated with the development of intelligent machines for military use are therefore extremely pressing and deserve careful consideration by scientists and ethicists.

Future Scenario

The deployment of lethal combat robots is a frightening scenario that might turn grim science fiction into reality. Existing military technologies already test the limits of people’s moral faculties by presenting philosophers with ethical conundrums of varying complexity. The most salient dilemma confronting modern ethicists concerns the implications of remote fighting, which have become especially pressing with the development of military drones (Noorman & Johnson, 2014). It follows that the emergence of autonomous lethal robots with advanced surveillance systems will present even more complex ethical problems.

The adoption of intelligent military weapon systems will inevitably shift warfare into the sphere of long-range conflict. A corollary is that the cost of a decision to kill will be minimized, while the consequences of lethal action will be amplified enormously. The tempo of military operations will skyrocket, allowing conflicting parties to destroy each other’s personnel with unprecedented efficiency and speed (Sandler, 2014).

If international policies fail to adjust to the changing landscape of the new technological reality, the global decision-making loop will become narrower, pushing nations toward secrecy. It follows that the normative terrain of relationships between nations will be shaped by calculations of expedience conducted by a few powerful actors. Given the moral distance involved in launching fully automated military operations, this development will endanger crucial foundations of global peace. Some states will try to exploit their newly acquired military capabilities and engage in unethical behaviors, thereby hastening the eruption of a global conflict that might eradicate humanity.

The pessimistic outlook on the future of the technology is based on careful consideration of human history and the state of modern warfare. The road forward necessitates the development of ethically acceptable robots whose operating systems allow for sophisticated moral reasoning. Govindarajulu and Bringsjord (2015) argue that future robotic designs must include logical systems positioned along the ethical dimension. By introducing ethical layers into the reasoning structures of robots, it is possible to create autonomous weapon systems that behave in accordance with strictly delineated moral requirements. However, there is no way to guarantee that the ethics modules of military robots will not be disconnected by rogue hackers.
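To make the idea of an embedded ethical layer more concrete, consider the following minimal Python sketch. It is purely illustrative: the class names, checks, and thresholds are hypothetical assumptions introduced for this paper, not Govindarajulu and Bringsjord’s actual formalism, which relies on formal deontic logic rather than simple rule checks. The sketch shows an ethical filter interposed between a robot’s planning layer and its actuators, so that hard moral constraints are enforced below the level of replaceable application code.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Verdict(Enum):
        PERMITTED = auto()
        FORBIDDEN = auto()
        REQUIRES_HUMAN_AUTHORIZATION = auto()

    @dataclass(frozen=True)
    class ProposedAction:
        """A candidate action emitted by the planning layer (hypothetical schema)."""
        is_lethal: bool
        target_is_combatant: bool   # assumed output of a target classifier
        collateral_risk: float      # estimated probability of harming non-combatants

    def ethical_layer(action: ProposedAction) -> Verdict:
        """Deontic filter checked before any action reaches the actuators."""
        # Constraint 1: never direct lethal force at a non-combatant.
        if action.is_lethal and not action.target_is_combatant:
            return Verdict.FORBIDDEN
        # Constraint 2 (discrimination): reject actions with high collateral risk;
        # the 0.05 threshold is an arbitrary placeholder, not a legal standard.
        if action.collateral_risk > 0.05:
            return Verdict.FORBIDDEN
        # Constraint 3: any remaining lethal action still requires a human in the loop.
        if action.is_lethal:
            return Verdict.REQUIRES_HUMAN_AUTHORIZATION
        return Verdict.PERMITTED

Because the filter sits below the planner, swapping out the planning software alone cannot bypass it; yet, as noted above, nothing in such a design prevents a determined attacker from patching or disconnecting the layer itself.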

Another approach to mitigating the ethical issue is to embed a pacifist philosophical stance into all robot designs. That is, all autonomous robot systems should be prevented from engaging in both offensive and defensive military actions, as sketched below. This position is masterfully presented by Tonkens in his essay on ethics-based robotics. The scholar argues that “either autonomous robots should be programmed to be pacifists or that autonomous robots should not be developed at all” (as cited in Altmann et al., 2013, p. 75).
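Building on the hypothetical sketch above, Tonkens’s pacifist stance reduces to a single unconditional constraint; the is_military flag is, again, an assumed input rather than an established interface:

    def pacifist_layer(action: ProposedAction, is_military: bool) -> Verdict:
        """Tonkens-style constraint: refuse every military action, offensive or defensive."""
        if is_military:
            return Verdict.FORBIDDEN
        return ethical_layer(action)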

It follows that the already existing lethal capacities of intelligent machines should be eliminated to prevent the future scenario outlined above. Given the technical challenges of developing autonomous weapon systems capable of moral reasoning, it can be argued that the solution proposed by Tonkens has merit. However, the feasibility of its practical implementation is threatened by the power ambitions of global political actors.

References

Altmann, J., Asaro, P., Sharkey, N., & Sparrow, R. (2013). Armed military robots: Editorial. Ethics and Information Technology, 15(2), 73-76.

Disturbing video depicts near-future ubiquitous lethal autonomous weapons. (2017). Web.

Govindarajulu, N. S., & Bringsjord, S. (2015). Ethical regulation of robots must be embedded in their operating systems. In R. Trappl (Ed.), A construction manual for robots’ ethical systems (pp. 85-99). New York, NY: Springer.

Hellstrom, T. (2013). On the moral responsibility of military robots. Ethics and Information Technology, 15(2), 99-107.

van den Hoven, J. (2014). Responsible innovation: A new look at technology and ethics. In J. van den Hoven, N. Doorn, T. Swierstra, B.-J. Koops, & H. Romijn (Eds.), Responsible innovation 1: Innovative solutions for global issues (pp. 3-15). New York, NY: Springer.

Noorman, M., & Johnson, D. G. (2014). Negotiating autonomy and responsibility in military robotics. Ethics and Information Technology, 16(1), 51-62.

Palm, E., & Hansson, S. O. (2006). The case for ethical technology assessment (eTA). Technological Forecasting & Social Change, 73, 543-558.

Roff, H. M. (2014). The strategic robot problem: Lethal autonomous weapons in war. Journal of Military Ethics, 13(3), 211-227.

Sandler, R. L. (Ed.). (2014). Ethics and emerging technologies. Hampshire, England: Palgrave Macmillan.
