Artificial Intelligence: Ethical, Social, Legal Issues

Artificial intelligence (AI) is an exciting but highly controversial field of information technology. Some sources say that the further development of this field will benefit humankind and help us solve many problems, for example, by finding cures for more diseases, increasing the human lifespan, opening new possibilities in space travel, and so on. At the same time, many other sources claim that such technologies will be harmful to their own creators and that the only good superintelligent machine is an unplugged one. The field of artificial intelligence does indeed raise numerous ethical, social, professional, and legal issues; but are they as disturbing as some people claim?

Ethical Issues

Let us start with ethical issues. The greatest concern in this regard is the threat to security. Any AI program, regardless of the level of intelligence it demonstrates, remains only software. Thus, it has all the drawbacks that software has. First of all, an AI program can be copied, as long as there are people who can do this and hardware that can store it (Bostrom 2003). This may be neither easy nor quick, but it can happen, and that is how valuable data can get into the wrong hands. Secondly, machines can make mistakes, which threatens security as well.

As an example, let us imagine that there is an intelligent vision program that scans people’s baggage at the airport (Bostrom & Yudkowsky 2014, p. 317). What if there is a flaw in the algorithm because of which the program is unable to recognize a bomb if a pistol is put next to it? Such a mistake is possible, and it threatens the security and safety of every person on board.
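
To make the nature of such a flaw concrete, here is a minimal Python sketch of a hypothetical scanner whose post-processing keeps only the single most confident detection per bag; all names, scores, and thresholds are invented for illustration. A high-confidence pistol then masks a lower-confidence bomb, whereas simply flagging every detection above a threshold would report both.

# Hypothetical detection scores for one bag (illustrative numbers, not from a real system).
detections = [("pistol", 0.97), ("bomb", 0.62), ("laptop", 0.40)]

def flawed_scan(detections):
    """Keeps only the single most confident object - the kind of flaw described above."""
    label, score = max(detections, key=lambda d: d[1])
    return [label] if score > 0.5 else []

def robust_scan(detections, threshold=0.5):
    """Flags every object whose score clears the threshold."""
    return [label for label, score in detections if score > threshold]

print(flawed_scan(detections))  # ['pistol'] - the bomb is silently ignored
print(robust_scan(detections))  # ['pistol', 'bomb']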

Another major concern is privacy. The boundary between the operation of AI programs and the violation of people’s privacy is rather thin. There are many examples of artificial intelligence being involved in large disputes because of privacy violations. As Weeks (2012) writes in The Globe and Mail, many businesses use AI programs to collect and store data about their customers: personal data taken from social networks, location information, shopping patterns, payment habits, and so on. They try to ‘track their customers’ to increase profits, but for a regular user this usually means a violation of his or her privacy (Weeks 2012, para. 3).

Joel Rosenblatt (2014) describes another example of privacy violation for marketing purposes: aiming to advertise its services to a greater number of customers, LinkedIn Corp. downloaded the contacts from the external e-mail accounts its customers used as usernames on the LinkedIn site and sent several advertising letters to those addresses. Apart from AI applications in advertising, many other programs are argued to violate users’ privacy. A prime example is speech recognition technology. The latest smartphones are able to recognize their users by voice, which, as some people claim, exposes users’ identity: ‘Your voice doesn’t just give away who you are, but what you’re like and what you’re doing … Your speech is like your fingerprints or your DNA’ (Rutkin 2015, para. 7).
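
The identification step behind such claims can be illustrated with a minimal sketch, assuming speakers have already been converted into fixed-length voice embeddings; the names, vectors, and threshold below are invented, and real systems derive far richer representations from the audio itself.

import numpy as np

# Hypothetical pre-computed voice embeddings for enrolled users.
enrolled = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob": np.array([0.2, 0.8, 0.5]),
}

def identify(sample, enrolled, threshold=0.85):
    """Returns the enrolled speaker whose embedding best matches the sample, if any."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    name, score = max(((n, cosine(sample, e)) for n, e in enrolled.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

print(identify(np.array([0.88, 0.15, 0.28]), enrolled))  # 'alice'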

Finally, the development of intelligent technologies can gradually lead to the development of even more intelligent ones, so-called advanced intelligence or superintelligence, and that can happen suddenly (Bostrom 2003). Although such intelligence will probably solve, or at least help humankind to solve, many problems, including poverty, incurable diseases, global environmental problems, and so on, if it is used for evil purposes, it can exacerbate many others.

As a prime example, AI can contribute to wars by creating advanced weaponry, such as autonomous weapons and military robots (Romportl, Zackova & Kelemen 2014). Some semi-autonomous weapons have already been used by the United States and North Korea, for example (Romportl, Zackova & Kelemen 2014). But even though these weapons did part of the work by themselves (identifying the target, for instance), they were not fully autonomous.

With all of this in mind, the question arises: do we actually need artificial intelligence, or is it safer to delay its development? The truth is that there is also another side of the debate. While many people claim that AI technologies constitute a threat to security, others argue that they can help strengthen it as well. In his article for the 3rd International Conference on Cyber Conflict, Enn Tyugu (2011) explains why AI technologies are one of the best options for defending cyberspace.

Considering the speed of the processes involved in cyber defense, as well as the amount of data transferred, people are unable to provide an appropriate defense without at least minimal automation. The situation becomes even more difficult in view of the development and growing intelligence of modern malware and the frequency and sophistication of cyber-attacks (Tyugu 2011, p. 102). To handle all of this and be able to fight back, people need intelligent defense methods; otherwise, the forces are not equal.
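
A minimal sketch of the kind of automation Tyugu has in mind might look like the following, where an anomaly detector is trained on ordinary connection statistics and asked to flag an unusual one; the features and numbers are invented for illustration only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented connection features: [bytes transferred, duration in seconds, failed logins].
normal_traffic = rng.normal(loc=[5000, 2.0, 0.1], scale=[1500, 0.5, 0.3], size=(500, 3))
suspicious = np.array([[900000, 0.2, 12.0]])  # huge transfer with many failed logins

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(detector.predict(suspicious))  # [-1] means the connection is flagged as anomalous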

Additionally, many ethical issues and risks associated with artificial intelligence can be eliminated, or at least minimized, by specific precautions. Bostrom and Yudkowsky (2014) write that, in order to be safe, AI technologies should be predictable, transparent to inspection (to make error detection possible and simple), and robust to manipulation, so as to avoid situations such as the one with a bomb and a pistol described above. Although artificial intelligence is fraught with some new ethical problems and concerns, these can and should be solved, as has already been done with every other technological development.

Social Issues

Apart from ethical issues, the literature also discusses social ones. One of the most important is that artificial intelligence should contribute to peace, harmony, and the well-being and development of the population rather than increase the number of wars and battles in the world. AI applications and techniques improve medicine by analyzing complex data and helping with diagnosis, treatment, and even the prediction of patient outcomes (Ramesh et al. 2004). Moreover, they can be used in almost any field of medicine (Ramesh et al. 2004). Artificial intelligence improves education by automating basic activities, providing tutoring AI programs, finding gaps in courses, and so on (10 Roles For Artificial Intelligence In Education 2015).
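
As a purely illustrative sketch of the outcome-prediction use case Ramesh et al. describe, a model can be fitted to patient records and asked for the risk of a complication; the synthetic data, features, and coefficients below are invented and carry no clinical meaning.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic patient records: [age, systolic blood pressure, lab marker]; outcome 1 = complication.
X = rng.normal(loc=[60, 130, 1.0], scale=[12, 15, 0.4], size=(200, 3))
y = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.5, 200) > 6.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_patient = np.array([[72, 150, 1.6]])
print(f"Predicted risk of complication: {model.predict_proba(new_patient)[0, 1]:.2f}")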

It can even contribute to psychology and help with people’s relationships. In their article, Hergovich and Olbrich (2002) provide a review of several AI technologies that help to predict conflicts and calculate their probabilities, determine the course of a dispute, and support peace negotiations. Finally, AI greatly accelerates the development of science. On the other hand, it can also be used to create advanced weapons. And the higher the level of development this field of science reaches, the more impact it has on our society.

Another significant social concern is how people will interact with superintelligent machines if they are ever created. Kizza (2013) writes about a social paradox: people want to create machines that will do their work, but they do not want these machines to become too good at it (p. 206). If machines do achieve greater intelligence than humans, people will become afraid of them, which raises many questions about cooperation between people and the intelligent agents they have created.

However, Ramos, Augusto, and Shapiro (2008) state that this problem can be addressed with the help of ambient intelligence, which helps to create technologies that are sensitive and responsive to people’s presence. If superintelligent machines are aware of people’s needs, can adapt to different environments (such as houses, schools, hospitals, offices, sports facilities, and so on), and can interact with people, they will have a better chance of being accepted by their creators (Ramos, Augusto & Shapiro 2008).
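
What being ‘sensitive and responsive to people’s presence’ might amount to can be sketched with a few simple rules; the sensors, thresholds, and actions below are invented for illustration, whereas real ambient-intelligence systems would adapt such behaviour to context.

from dataclasses import dataclass

@dataclass
class RoomState:
    occupied: bool
    hour: int            # 0-23
    temperature_c: float

def ambient_response(state: RoomState) -> list:
    """Returns the actions a presence-aware room might take (the rules are invented)."""
    actions = []
    if state.occupied:
        if state.hour >= 19 or state.hour < 7:
            actions.append("dim the lights for the evening")
        if state.temperature_c < 20:
            actions.append("raise the heating to 21 C")
    else:
        actions.append("switch the lights off and lower the heating")
    return actions

print(ambient_response(RoomState(occupied=True, hour=21, temperature_c=18.5)))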

Finally, the last controversial question discussed in the literature concerns professional and legal aspects. As Elkins (2015) writes in her article in Business Insider, experts predict that by 2025 robots will take over one-third of all people’s professions. Even now, robots work in health care centers assisting doctors and are starting to learn some white-collar jobs, which previously seemed challenging for them (Elkins 2015). If the predicted outcome happens and one-third of people’s jobs are performed by intelligent machines, it will not only change our economic system significantly but will also bring many legal issues. First of all, should robots be given the same rights as people? Should they be protected by the Constitution or provided with full civil rights, which include the right to reproduce (Elkins 2015)?

Additionally, if an intelligent machine commits a crime, should it be held responsible for it, as a human being would be? Although it sounds unrealistic and even funny, something like that has already happened: an intelligent shopping robot developed for purchasing purposes managed to buy a Hungarian passport and Ecstasy pills (Elkins 2015). That time, the robot was not charged (Elkins 2015). However, if something like this happens again, who is responsible? While many people claim that intelligent agents should be given more or less the same rights and responsibilities that people have, Jack Millner (2015) believes, and I agree, that robots will need new legislation established for them.

To conclude, the development of artificial intelligence is indeed fraught with many controversial questions and problems. AI can give our society many positive things, such as advances in education and medicine, breakthroughs in science, and so on. At the same time, it increases the risk of wars and brings numerous unsolved professional and legal issues. Nevertheless, from my point of view, the greatest problem with artificial intelligence is humankind itself, paradoxical as that sounds. People have always been afraid of everything new, and the possibility of creating machines that are more intelligent than humans is frightening as well. Besides, many ethical and social problems can be solved, and even legal issues can be regulated. The only truly insoluble problem is the one concerning wars and weaponry. It is highly unlikely that superintelligent technologies will take over the world and destroy it, but people can do it by themselves.

Reference List

10 Roles For Artificial Intelligence In Education 2015.

Bostrom, N & Yudkowsky, E 2014, ‘The ethics of artificial intelligence’, in K Frankish & W Ramsey (eds), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, Cambridge, pp. 316-334.

Bostrom, N 2003, ‘Ethical Issues in Advanced Artificial Intelligence’, Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, vol. 2, no. 1, pp. 12-17.

Elkins, K 2015, Experts predict robots will take over 30% of our jobs by 2025 — and white-collar jobs aren’t immune.

Hergovich, A & Olbrich, A 2002, ‘What can Artificial Intelligence do for Peace Psychology?’, Review of Psychology, vol. 9, no. 1-2, pp. 3-11.

Kizza, JM 2013, Ethical and Social Issues in the Information Age, 5th edn, Springer, New York, New York.

Millner, J 2015, Should robots have human rights?

Ramesh, AN, Kambhampati, C, Monson, JR & Drew, PJ 2004, ‘Artificial intelligence in medicine’, Annals of the Royal College of Surgeons of England, vol. 86, no. 5, pp. 334-338.

Ramos, C, Augusto, JC & Shapiro, D 2008, ‘Ambient Intelligence – the Next Step for Artificial Intelligence’, IEEE Intelligent Systems, vol. 23, no. 2, pp. 15-18.

Romportl, J, Zackova, E & Kelemen, J 2014, Beyond Artificial Intelligence: The Disappearing Human-Machine Divide, Springer, New York, New York.

Rosenblatt, J 2014, LinkedIn Ordered to Face Customer E-Mail Contacts Lawsuit.

Rutkin, A 2015, Speech recognition AI identifies you by voice wherever you are.

Tyugu, E 2011, ‘Artificial intelligence in cyber defense’, paper presented at the 3rd International Conference on Cyber Conflict, Tallinn.

Weeks, C 2012, Dear valued customer, thank you for giving us all of your personal data.
