Artificial Technology’s Ethical, Social, Legal Issues

Introduction

Artificial intelligence (AI) refers to the creation of intelligent machines that can perform actions traditionally reserved for human beings. Another definition describes AI as the study and design of intelligent agents that can emulate human thought and, as a result, perform acts commonly performed by humans (Acton 2013). The intelligent behaviour exhibited by such machines has been a source of heated debate.

Research in this area is specialised and technical because it involves complex processes and procedures. The main areas of AI research include learning, planning, knowledge representation, reasoning, and perception (Acton 2013).

AI has faced harsh criticism because of the ethical, social, and legal issues that emerge from its adoption. Experts in different fields have questioned the rationale of creating artificial beings that possess human capabilities and can thus replace human beings in their various roles. Examples of the application of artificial intelligence in contemporary society include robots, virtual personal assistants, smart cars, security surveillance, and smart home devices. Everyday devices such as cars, smartphones, and automatic teller machines (ATMs) are examples of the daily application of artificial intelligence (Bostrom 2014).

Ethical issues

Roboethics

One of the most common ethical issues in artificial intelligence is roboethics. Roboethics refers to the morality of designing, manufacturing, and using artificial beings, which can both improve and harm human life (Acton 2013). Society has certain moral obligations toward artificially intelligent beings, in the same way that animals and human beings possess rights that protect their welfare (Bostrom 2014).

In that regard, experts have not agreed on the scope of the rights that these beings should enjoy, nor on whether laws to protect those rights should be enacted. Another ethical issue in AI is the threat to human privacy (Bostrom 2014). This could result from an AI program that understands and decodes all languages without requiring any human input. Such a program could be used to infringe on people’s privacy by listening to their phone conversations and reading their emails, and by reporting the contents of verbal and written communication back to its operators. One of the major threats of such a program would be its use by governments to spy on citizens and to attack other nations that disagree with their policies.

AI is a threat to human dignity because it can be used to replace people in the performance of daily tasks (Acton 2013). Therapists, police officers, teachers, nurses, customer representatives, and salespeople work in areas that require human qualities such as kindness, care, respect, and deep understanding. Human interaction requires authentic feelings of empathy and compassion. The danger of replacing humans in these positions with intelligent beings is that machines cannot show kindness, compassion, respect, and care (Bench-Capon & Dunne 2007). As a result, people will feel alienated, disrespected, uncared for, and disappointed.

It is impossible for machines to express the feelings that are critical for deep connections. A human being is made up of three important components, namely body, mind, and soul. Artificial technology eliminates the “soul” component, which is essential for human roles that AI cannot perform or mimic (Bench-Capon & Dunne 2007). Frankish & Ramsey (2014), however, have supported the replacement of humans by machines in certain areas. For example, they argue that corruption and injustice would be nonexistent in the criminal justice system because intelligent beings would be fair to everyone. They would promote the ethical principles of fairness and justice without favouring anyone.

Use of AI as weaponry

It would be dangerous to use artificial intelligence for roles such as military combat because of the uncertainty surrounding the reliability of its autonomous functions (Frankish & Ramsey 2014). Experts have urged that great care be taken when using military robots in combat because their ability to make autonomous decisions has severe implications. According to Bench-Capon & Dunne (2007), artificial beings can make decisions more effectively than human beings owing to the absence of interference from emotions, prejudices, and cognitive biases. Even so, some experts have warned against using AI in military combat because it could spark a takeover of mankind by robots (Bench-Capon & Dunne 2007).

Moreover, AI weapons are more dangerous than weapons controlled by humans. Worries and doubts multiply as more countries embrace the idea of manufacturing AI weaponry.

Potential dangers of AI weaponry include robots going rogue and artificial beings developing minds of their own and using them to destroy human existence (Warwick 2013). The destruction of human existence would contravene the ethical principle of non-maleficence, which advocates doing no harm to human beings. It would be dangerous for humans to cede the responsibility of securing their borders to artificial beings. History shows that when two civilizations clash, the survivor is the one with the higher intelligence. A conflict between humans and artificial beings could therefore lead to the termination of human existence.

Machine ethics

One of the greatest challenges in AI research is how to build robots that follow ethical standards similar to those followed by humans (Warwick 2013). The issue of machine ethics has increased doubts as to whether the adoption of AI is a good choice. One of the most prominent questions regarding the manufacture of ethical robots is how engineers can design them to react appropriately when faced with a dilemma involving two bad choices (Frankish & Ramsey 2014).

The pace of advancements in AI is currently so high that these issues might not be adequately addressed before artificial beings take over major roles in sectors such as healthcare and security. Full acceptance of AI will primarily depend on whether artificial beings can act in ways that conform to acceptable social norms and that enhance the safety of humans (Warwick 2013). The ability of artificial intelligence to reason successfully in ethical situations is an important matter that needs to be evaluated thoroughly. It is difficult to determine how much and what kind of intelligence these beings require in order to act ethically (Robinson 2015).

In an experiment involving AI, a robot was programmed to remind people to take their medicine at specific times. One of the questions that arose from the experiment was how the robot should react if the patient declined to take the medicine. A patient’s refusal to take medicine is harmful to their health and well-being (Frankish & Ramsey 2014). On the other hand, coercing the patient to take it would infringe the ethical principle of respect for personal autonomy. Engineers have argued that one strategy for addressing this challenge is a rule-based approach to creating robots.

This means that robots will be programmed with rules that direct them to act in specific ways in different situations (Warwick 2013). Humans would then be able to discern why an artificial being acts in a certain way. This is especially critical for the military, where it is important to know what a robot would do in certain situations (Robinson 2015).
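The rule-based approach described above can be illustrated with a small, hypothetical sketch. The situation and action names below are invented for illustration and do not come from the experiment itself; the point is that every action is chosen in advance by the designers, so a human can always trace why the robot behaved as it did, and the unresolved dilemma (a patient refusing medicine) is escalated to a person rather than decided by the machine.

```python
# A highly simplified, hypothetical sketch of a rule-based ethical agent:
# each observed situation maps to a pre-programmed action.

RULES = {
    "medication_due": "remind_patient",
    # The patient's refusal is not overridden (respecting autonomy);
    # the dilemma is escalated to a human caregiver instead.
    "patient_declined": "notify_caregiver",
    "patient_accepted": "log_dose_taken",
    "no_event": "wait",
}

def decide(situation: str) -> str:
    """Return the pre-programmed action for a situation.

    Unknown situations fall back to consulting a human operator,
    rather than letting the machine improvise an ethical judgement.
    """
    return RULES.get(situation, "ask_human_operator")

if __name__ == "__main__":
    for event in ["medication_due", "patient_declined", "unexpected_fault"]:
        print(event, "->", decide(event))
```

Because the rule table is explicit, an observer can audit exactly which rule produced a given action, which is the transparency property the paragraph above attributes to this approach.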

Many people have rejected the use of autonomous, militarised robots in combat because they pose several risks to human life. In addition, they would contravene the ethical principle of non-maleficence. Lack of understanding, intuition, and judgment would affect the decision-making abilities of AI and as a result endanger the lives of human combatants. According to Warwick (2013), robots can perform better than human beings in many situations because their programming could never allow them to contravene rules of combat that human beings disobey in many instances. Many experts have advocated for the use of logic in the creation of ethical machines.

They argue that logic is what human beings use on many occasions to make ethical decisions. However, developing instructions that follow the logical steps involved in ethical decision making is very challenging (Frankish & Ramsey 2014). Many scientists have expressed fears that it could already be too late to instil ethics into machines because they are now beyond human control. Intelligence originates from highly interconnected information-processing systems, because the replication of information initiates a process of evolution. In that regard, the internet is providing fertile ground for artificial beings to develop superintelligence.

AI has led to the creation of machines that duplicate, vary, and merge huge amounts of information within short periods. These machines possess processing capacities that surpass those of human beings. The existence of these capabilities means that a process of evolution has been initiated (Warwick 2013). This concern is reinforced by the argument that AI is evolving not to help people improve their lives but for its own benefit. In that regard, scientists warn that the evolution will be driven by the huge amounts of data and the new software and hardware incorporated into the technology cloud on a daily basis (Warwick 2013).

In the last few decades, technology has advanced rapidly and led to the creation of more intelligent machines. In many economic sectors, human labour has been replaced by automated machines that have improved the efficiency of business processes. This advancement indicates that, in the future, machines could evolve to attain superintelligence that surpasses the intelligence of humans. The Machine Intelligence Research Institute has urged scientists to aim toward creating machines that are friendly and humane in order to avert a possible takeover of mankind by artificial beings (Warwick 2013). Creating robots that are self-reliant and autonomous could be a threat to human existence. An example is the development of computer viruses that can evade elimination from operating systems.

Social issues

Loss of jobs

The most critical social issue in AI is the elimination of jobs as human labour is replaced by machines (Warwick 2013). In the last two decades, advanced technologies have led to the automation of many processes, resulting in the loss of jobs once performed exclusively by humans (Lohr 2011). The ethical principle of benevolence is defined as the disposition to show kindness, goodwill, and generosity toward other people by doing what is good.

This principle is contravened by AI because taking jobs away from human beings is unkind and does them no good. As the world experiences advances in AI, more people lose their jobs to machines. The service industry has been the most affected. For example, many factories and farms have automated most of their operations, thus requiring less human involvement than before (Warwick 2013). Technological unemployment is a term used by the economist John Maynard Keynes to describe a situation in which technology eliminates jobs faster than the economy can create new ones (Lohr 2011).

The possibility of higher rates of unemployment is validated by the current pace of automation. Technological advancement is facilitating the creation of computers, software, robots, and artificial beings that possess the capability to perform many activities (Lin et al. 2011).

In contemporary society, automation has a wide scope because it has moved from farms and factories to sales, marketing, banking, and call centres. It has greatly affected the services sector, which is the major source of employment (Warwick 2013). The transportation sector will soon be affected by the looming introduction of autonomous cars. Traditionally, it was thought that technology could not perform complex tasks such as driving cars. However, this belief was abandoned after self-driving cars were manufactured. Artificial intelligence will continue to evolve and replace humans in key employment areas (Lohr 2011).

Some experts have argued that complete replacement of humans by AI will not happen because it performs poorly in areas such as creativity and intuition (Lin et al. 2011). Artificial beings perform well only in areas that require basic human thinking capabilities. The accuracy of this argument is uncertain because recent advancements in AI research do not reveal the limits of technological evolution or the scope of activities that AI can perform.

Replacement of human judgment by machine judgment

Another social issue associated with artificial intelligence is the replacement of human judgment by machine judgment (Lin et al. 2011). Some experts have argued that even though AI could eliminate the need for humans in many jobs, it is impossible for machines to mimic or possess three human qualities, namely experience, values, and judgment (Payr 2011). AI has a very promising future, hence the proposal to replace human services in areas such as education and social care with algorithmic regulation (Muller 2016).

This means that instead of relying on humans to make decisions based on limited information, artificial beings will be used because their capabilities will ensure more accurate projections of the outcomes of plans and strategies.

This proposal has been rejected in many fields because technology should not make decisions for humans but help them make decisions (Vardi 2012). The delivery of services such as social care and health care depends on the application of human qualities such as understanding, judgment, empathy, and compassion (Gunkel 2012). It is impossible for artificial beings to incorporate these aspects into their interactions with humans. Therefore, allowing them to take over the roles of humans in the services sector would be a mistake. Human decision-making is preferred to machine decision-making because it is informed by values and experience (Muller 2016).

The capabilities of AI have been compared to human intelligence because of variations in the speed of processing information. However, such comparisons ignore factors involved in human processing, such as free will, the ability to set objectives, and the ability to make decisions based on values (Payr 2011). For AI to be fully effective and acceptable, it should be able to make decisions that are consistent with human behaviour and values (Muller 2016).

Artificial intelligence can only make decisions based on available data. However, that is very different from human judgment because it does not incorporate any values into the decision-making process. Decisions rely on values that originate from life experiences. Artificial beings do not experience life and therefore cannot develop the values that are critical in making decisions (Gunkel 2012). In addition, their ability to set goals or objectives is based on programming, which limits the roles they can play. The process of developing values requires free will, which AI cannot possess. In that regard, AI will not be able to make conscious decisions like humans.

Changes in how humans understand themselves

The embrace of AI will also alter humans’ understanding of themselves through the occurrence of a hypothetical event referred to by scientists as the technological singularity (Gunkel 2012). This event is projected to occur when artificial general intelligence surpasses the intelligence of human beings and when artificial beings possess the capacity for self-improvement and self-understanding. Humans gain satisfaction and fulfilment by finding purpose in their lives. However, being replaced by AI in all areas would mean that they would have nothing to do other than exist at the mercy of artificial beings (Goodman 2015). In that regard, humans would become obsolete because they would stop using their cognitive abilities to solve problems, since AI would do that for them.

Proponents of AI argue that artificial beings will create more jobs rather than destroy them. A study conducted by the research firm Metra Martech concluded that robots are responsible for creating more than 3 million jobs (Muller 2016). These jobs were identified in areas that included solar and wind power, advanced battery manufacturing, food, and consumer electronics. The report also stated that the increased use of robots will lead to the creation of more jobs.

The authors argue that robots work in areas that are unsafe for humans and do work that humans cannot carry out. In addition, they perform tasks that would be uneconomical in high-wage economies. Robots have saved industries such as shipbuilding and electronics and, as a result, enhanced growth and employment (Poole & Mackworth 2010).

Legal issues

One of the greatest threats arising from the embrace of AI is the lack of an effective legal framework for the emerging technology. The need for a legal framework to regulate the development of AI arose after several scientists expressed fears about the possibility of a technological singularity that could pose an existential risk to human beings (Goodman 2015). AI has severe short-term and long-term effects. Therefore, lawmakers and policymakers need to fully understand these implications in order to develop policies and laws that address them effectively. There have been arguments regarding the rapid expansion of AI. Some experts argue that regulation is needed because, at the current pace, the technological singularity could be reached much faster than previously projected.

Examples of the most important legal issues in artificial intelligence include privacy, the use of personal data, and liability for errors made by AI (Poole & Mackworth 2010). Many experts are worried about the processes that AI will use to capture data and how that data will be used by organizations. It is also important to have a legal framework to determine who will be responsible for the mistakes and errors committed by artificial beings (Russell & Norvig 2010). It is important to address these issues now because AI is evolving at a rapid rate, and it will be impossible to regulate it after the attainment of technological singularity.

The current evolution of supercomputers has blinded scientists and other experts, who have focused all their attention on improving the technology while remaining oblivious to the dangers of such rapid advancement without legal regulation. Many people have questioned how the huge amounts of data will be managed so as to serve societal interests rather than the interests of AI. There is therefore a need for a speedy solution, because humans are currently overly dependent on machines and effective measures have not been put in place to protect their privacy (Russell & Norvig 2010). The development of deep learning systems has sparked several controversies because it has enabled AI to perform functions such as speech recognition, natural language processing, and computer vision (Vardi 2012).

These abilities can be used to personalize products and to enable AI to acquire knowledge by recognising patterns in available data. These technologies are equipping AI with the capability to analyze human behaviour, desires, and moods. Such capabilities have severe legal implications because predicting human behaviour encourages exploitation, hence the need for regulation (Smith 2015). Machines can mine huge amounts of personal data in order to predict human behaviour. AI will initiate numerous changes in human life, hence the need for legislation to address these changes (Smith 2015).

Another legal issue concerns liability for the errors made by artificial intelligence. Artificial intelligence can accomplish tasks such as information processing, comprehension of questions, and the supply of answers or directions. However, it has not yet attained independence. Current AI systems possess limited capabilities and no self-awareness. The technological singularity is deemed an illusion because AI would need to reason, learn, plan, obtain expertise, communicate in natural languages, and incorporate these skills into a single system in order to surpass human intelligence (Vardi 2012).

It has not yet been determined who would be responsible for programming errors in safety-critical systems or how such mistakes would be corrected. Experts have argued that the U.S. National Academy of Engineering should conduct a thorough study of the implications of AI and provide policy recommendations to regulate the field.

Conclusion

In the last two decades, the science and technology of artificial intelligence has made rapid advancements that have caused worry among scientists and experts in various fields. AI research is increasingly facilitating the creation of artificial beings that are getting smarter with each technological advancement. This progression will have social, economic, legal, and ethical implications, hence the need for regulation. Ethical issues in AI include roboethics, machine ethics, and the use of AI as weaponry. Social issues include the loss of jobs, the replacement of human judgment by machine judgment, and changes in how humans understand themselves. Legal issues include privacy, the use of personal data, and liability for errors caused by artificial agents.

References

Acton, Q 2013, Issues in Artificial Intelligence, Robotics, and Machine Learning, Scholarly Editions, New York.

Bench-Capon, TJ & Dunne, PE 2007, “Argumentation in artificial intelligence”, Artificial Intelligence, vol. 171, no. 10-15, pp. 619-641.

Bostrom, N 2014, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Oxford.

Frankish, K & Ramsey, WM 2014, The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, New York.

Goodman, J 2015, Why We Have to Get Smart About Artificial Intelligence. Web.

Gunkel, DJ 2012, The Machine Question: Critical Perspectives on AI, Robots, and Ethics, MIT Press, Cambridge, MA.

Lin, P, Abney, K & Bekey, G 2011, “Robot Ethics: Mapping Issues for a Mechanized World”, Artificial Intelligence, vol. 175, no. 5-6, pp. 942-949.

Lohr, S 2011, More Jobs Predicted for Machines, Not People. Web.

Muller, V 2016, Risks of Artificial Intelligence, CRC Press, New York.

Payr, S 2011, “Social engagement with robots and agents: introduction”, Applied Artificial Intelligence: An International Journal, vol. 25, no. 6, pp. 441-444.

Poole, D & Mackworth, AK 2010, Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press, New York.

Robinson, R 2015, 3 Human Qualities Digital Technology can’t replace in the future economy: experience, values and judgment. Web.

Russell, S & Norvig, P 2010, Artificial Intelligence: A Modern Approach, Prentice Hall, New York.

Smith, A 2015, Artificial Intelligence: The Real Threat with Advances in AI May Stem from Our Failure to Create a Policy Framework for Emerging Technology. Web.

Vardi, M 2012, The consequences of machine intelligence. Web.

Warwick, K 2013, Artificial intelligence, Taylor & Francis, New York.
