Introduction
Artificial Intelligence (AI) is evolving rapidly, offering numerous potential benefits for humankind. At the same time, the technology carries risks of adverse consequences for society. According to Jobin et al. (2019), AI refers to “the theory and development of computer systems able to perform tasks normally requiring human intelligence” (p. 389). Such technology requires processing and analyzing significant amounts of data and involves the deployment of autonomous systems. Hence, ethical principles should guide these operations, and guidelines need to be offered to companies implementing AI technologies. This paper aims to analyze ethical issues arising from the employment of Artificial Intelligence and to provide recommendations that help businesses and organizations resolve the moral dilemmas posed by technological disruption.
Situation
As AI advances and becomes more sophisticated, it opens more possibilities for misuse. Identifiable data can be exploited for profit or malicious purposes, and systems remain under constant threat of hacking. These facts raise public awareness of the risks that new technologies and AI implementation pose to citizens. Preventing the manipulation of data requires attention to ethical concerns and well-designed safeguards. In this regard, examining cases of ethics violations in AI can help identify the issues and develop recommendations that allow businesses and organizations to prevent misuse.
One aspect to consider is the application of AI technologies in the healthcare industry. For instance, Bird et al. (2020) mention the case of “the precise surgical robotic assistant ‘the da Vinci’” introduced to shorten recovery after surgery (p. 54). Multiple injuries and some deaths were reported as a result of the tool’s use in complex surgeries, most of them caused by device malfunction or user error. Robot-assisted procedures require adequate training of the medical workforce to minimize the risk of harm to patients, which is not always provided. Moreover, legal liability remains unsettled: the tool is still widely used despite numerous lawsuits against it (Bird et al., 2020). In this regard, ethical issues regarding the autonomy, control, and safety of the robotic assistant need to be addressed.
Another case involves the application of AI in neurotechnology and the challenges it might pose for humankind. According to Yuste et al. (2017), a brain-computer interface (BCI) is a system that exchanges information between the brain and an electronic device such as a computer. A BCI detects brain signals that indicate an individual’s intent and translates them into device commands that carry that intent out. Current technology focuses on therapeutic purposes, such as enabling people with spinal cord injuries to complete simple tasks. However, the range of future BCI applications is hard to limit, which raises ethical concerns that need to be addressed now. For instance, as Yuste et al. (2017) report, a man who used a brain stimulator to treat his depression began to experience a blurred sense of identity after seven years. According to Yuste et al. (2017), he “began to wonder whether the way he was interacting with others … was due to the device, his depression or whether it reflected something deeper about himself” (Agency and identity section, para. 1). Therefore, the possibility that AI technologies may alter a person’s sense of identity must be investigated.
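To make the signal-to-command loop more concrete, the sketch below shows, in simplified form, how a decoder might map a window of recorded brain activity to a small set of device commands. It is a minimal illustration only: the channel count, features, command set, and the untrained linear decoder are hypothetical and do not describe any clinical system discussed by Yuste et al. (2017).

```python
# Illustrative signal-to-command pipeline for a BCI.
# All names, shapes, and commands here are hypothetical.
import numpy as np

COMMANDS = {0: "rest", 1: "cursor_left", 2: "cursor_right"}

def extract_features(eeg_window: np.ndarray) -> np.ndarray:
    """Reduce a (channels x samples) window to per-channel log power."""
    return np.log(np.mean(eeg_window ** 2, axis=1) + 1e-12)

def decode_intent(features: np.ndarray, weights: np.ndarray) -> str:
    """Score each command with a linear decoder and pick the best one."""
    scores = weights @ features  # one row of weights per command
    return COMMANDS[int(np.argmax(scores))]

# Usage: random numbers stand in for a real recording and a trained decoder.
rng = np.random.default_rng(0)
window = rng.normal(size=(8, 250))   # 8 channels, ~1 second of samples
weights = rng.normal(size=(3, 8))    # untrained weights, for illustration
print(decode_intent(extract_features(window), weights))
```

The ethical questions discussed above arise precisely at the decoding step: the decoder, not the person, has the final word on what counts as an intention.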
Finally, another moral concern involves the collection and usage of personal data. For instance, Bird et al. (2020) mention the Facebook–Cambridge Analytica data breach, in which the personal data of more than 50 million Facebook users was used without their consent. Identifiable information from Facebook accounts was collected and utilized for political purposes in the 2016 U.S. presidential election and the Vote Leave Brexit campaign. The scandal caused significant concern among the platform’s users about the misuse of their personal data and sparked the #DeleteFacebook movement, endangering the platform’s reputation. As Bird et al. (2020) emphasize, the Facebook–Cambridge Analytica data breach indicates that “AI can be used to target and manipulate individual voters” (p. 17). Furthermore, the incident undermined trust in social media platforms, which are now viewed with caution. Democracy depends on fair and free elections, yet it is evident that AI can be used for malicious purposes that threaten political outcomes. Hence, analyzing the misuse of AI capabilities is crucial for preventing future data breaches and improving privacy policies.
Analysis
The cases and ethical concerns described above require analysis to identify ways of resolving the moral issues associated with AI. In this regard, the effects of these ethical problems on businesses, governments, and society need to be considered. Risk assessment for emerging technologies is essential for preparing for potential adverse effects, especially for influential technology companies whose practices set standards for other businesses.
Different frameworks can be applied to analyze the ethical issues attributed to the deployment of AI. In particular, Bird et al. (2020) suggest a model that differentiates impacts on society, human psychology, the financial system, the legal system, the environment, and trust. Based on this scheme, the case of the surgical robotic assistant raises ethical concerns about legal liability, tort law, the labor market, and control over the tool. Since it can be difficult to identify whether damage during surgery was the machine’s or the operator’s fault, the question arises of who should be responsible for the AI’s actions.
The usage of advanced BCIs in neurotechnology can affect human psychology and relationships, as well as raise concerns about bias and about control over an individual’s intentions. According to Yuste et al. (2017), a scenario is conceivable in which a paralyzed man participating in a clinical trial of a BCI accidentally hurts a member of the experimental team. While the man believes the accident resulted from a device malfunction, he also wonders whether his frustration with the team could have caused it. This scenario builds on the case of the man who reported an altered sense of identity after years of brain stimulation.
Finally, the Facebook–Cambridge Analytica data breach affected privacy and human rights, the legal system, democracy, human psychology, transparency, and trust. Besides its reputation, Facebook’s financial situation suffered, lowering the company’s value: the data privacy violation was followed by a drop in the corporation’s stock returns (Bird et al., 2020). Moreover, the breach violated democratic principles and broke user trust. As a result, the incident led to legal repercussions for Facebook and raised public awareness of the risks associated with personal data collection and usage.
Recommendations
Based on the discussion and analysis of the facts, recommendations need to be made to improve the implementation of AI and help businesses and governments resolve ethical issues. According to Jobin et al. (2019), one of the crucial debates is “whether AI should be held accountable in a human-like manner or whether humans … are ultimately responsible for technological artifacts” (p. 389). In this regard, the principles of beneficence and non-maleficence should be clearly distinguished. As Floridi et al. (2018) recommend, human choice should be established as central, promoting people’s autonomy and restricting the power of machines. Any autonomy delegated to AI should be reversible, especially in fields that involve health- or life-threatening risks, such as healthcare and surgery.
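As a rough illustration of what reversible autonomy could look like in software, the sketch below shows a pattern in which an AI system only proposes actions, a human confirms each one, and every approved step records an undo so an operator can halt and reverse the process. The class and action names are hypothetical and are not drawn from the cited sources.

```python
# A minimal sketch of reversible, human-supervised autonomy.
# The names and actions are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SupervisedActuator:
    confirm: Callable[[str], bool]                    # human decision point
    undo_log: List[Callable[[], None]] = field(default_factory=list)

    def execute(self, action: str,
                do: Callable[[], None], undo: Callable[[], None]) -> bool:
        """Run an AI-proposed action only with human approval; record an undo."""
        if not self.confirm(f"Proceed with: {action}?"):
            return False                              # human veto: nothing happens
        do()
        self.undo_log.append(undo)                    # reversibility guarantee
        return True

    def rollback(self) -> None:
        """Reverse all approved actions, most recent first."""
        while self.undo_log:
            self.undo_log.pop()()

# Usage: the demo auto-approves; a real system would prompt a human operator.
arm = SupervisedActuator(confirm=lambda prompt: True)
arm.execute("advance instrument 2 mm",
            do=lambda: print("advanced"),
            undo=lambda: print("retracted"))
arm.rollback()                                        # operator halts and reverses
```

The design choice matters here: the human veto sits before the action and the undo log after it, so the machine never holds irreversible control.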
Other recommendations include estimating and addressing potential threats before they occur. For instance, the effects of an advanced BCI can be predicted based on data from patients who undergo therapy and report anomalies. Finally, data privacy and the non-malevolent use of identifiable information collected by AI must be guaranteed. As Jobin et al. (2019) state, three categories of measures can be listed: “technical solutions, such as differential privacy, … data minimization and access control; calls for more research and awareness; regulatory approaches” (p. 389). Overall, it is crucial to set values for AI to uphold and to ensure that humans have the upper hand in decision-making. Such an approach can be critical in the event of technological or setup errors.
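Of the technical solutions Jobin et al. (2019) name, differential privacy is the most readily illustrated. The sketch below shows the classic Laplace mechanism applied to a counting query: noise calibrated to the query’s sensitivity masks any single person’s contribution to the released answer. The toy dataset and the epsilon value are illustrative assumptions, not taken from the cited sources.

```python
# A minimal sketch of differential privacy via the Laplace mechanism.
# The dataset and epsilon are illustrative, not from the cited sources.
import numpy as np

def private_count(records, predicate, epsilon: float, rng) -> float:
    """Release a noisy count of records matching the predicate.

    A count query has sensitivity 1 (adding or removing one person changes
    the true answer by at most 1), so Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(42)
ages = [23, 35, 41, 29, 62, 54, 38]        # toy dataset
print(private_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng))
```

Smaller epsilon values add more noise and thus stronger privacy; data minimization and access control, the other measures quoted above, operate before any query is run at all.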
Conclusion
To summarize, the ethical issues associated with the application of Artificial Intelligence are complex and require analysis to help businesses and organizations resolve the dilemmas they raise. This paper investigated three cases: the healthcare industry, neurotechnology, and personal data collection and usage. Based on the analysis, key recommendations were made: maintaining human control over AI processes, addressing potential threats beforehand, and implementing regulatory approaches, including the establishment of legal liability.
References
Bird, E., Fox-Skelly, J., Jenner, N., Larbey, R., Weitkamp, E., & Winfield, A. (2020). The ethics of artificial intelligence: Issues and initiatives. European Parliamentary Research Service.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
Yuste, R., Goering, S., Agüera y Arcas, B., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., Friesen, P., Gallant, J., Huggins, J. E., Illes, J., Kellmeyer, P., Klein, E., Marblestone, A., Mitchell, C., Parens, E., Pham, M., Rubel, A., Sadato, N., … Wolpaw, J. (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159-163.