Ethics of Computer Influence on Human Lives

Introduction

When it comes to interconnecting individuals and organizations, computers have virtually no limitations. When data transfers occur without any physical connection, a person may not have enough time to adequately examine the transmission’s results (Bhargava 5; Cuesta 2; Martin et al. 308; Susser et al. 2019). Because of technological improvements such as high-speed connections and multiple-copy distribution, new ethical challenges involving rights and values have emerged. Given today’s technological breakthroughs, the influence of computers on human lives and its ethical implications should be re-examined (Martin et al. 308; Peters et al. 1). The objective of this research is to investigate the ethical implications of technology and its positive and negative effects on human welfare.

Motivation

The ethical implications of emerging technologies have been examined to some extent. However, little research has investigated how organizational and management constraints might affect these decisions, although corporate privacy cultures have been found to shape how privacy and security design decisions are made (Martin et al. 308; Peters et al. 1; Susser et al. 2019). There is therefore a strong reason to investigate the ethics of how computers affect people’s lives and general well-being.

Explanation of the Topic

The ethical difficulties surrounding the influence of technology on human lives are discussed and analyzed in this article. It starts by discussing the ethical implications of technology for autonomy in decision-making, as well as the ethical implications of trusting technology with human welfare. Following that, it examines the ethics of technological ill-being in the workplace. Research limitations and recommendations for future work are provided afterward.

Background

Due to ethical concerns, individuals may be opposed to the use of technology. This opposition may manifest as resistance to new ideas, to the acquisition of new skills, and to the exchange of information across groups and professions (Cuesta 2). For instance, ethical objections may arise out of concern for patient safety and treatment quality, or out of respect for persons with disabilities (Cuesta 2). Every piece of technology has the potential to have a beneficial or harmful influence on mental health, including the repercussions of smartphones’ immediate connectedness. According to Peters et al. (1), checking email often during the day raises stress levels, while using a mobile phone lowers the quality of face-to-face interactions. Additionally, technology may be developed to actively improve or influence people’s emotions, and interface designers have shifted their attention from basic usefulness to making products appealing and engaging in order to increase adoption (Martin et al. 308).

On the other hand, people’s reluctance to change is often fueled by a fear of losing some aspect of authority or control over their lives, as well as their sense of personal or professional integrity. Because of how technology is conceived, a range of assumptions exists regarding how it should be utilized (Cuesta 3). These assumptions may take the form of prejudices or beliefs, both inadvertent and purposeful, particularly as artificial intelligence (AI) and self-driving digital systems become more and more significant in individual, business, and government decisions (Martin et al. 308). As a result of contemporary technology being used in the home, some people may feel isolated. Conflicting objectives may exist, and respecting the integrity, dignity, and vulnerability of users can be difficult. Hence, it is important to examine the ethical implications of current and emerging technologies and their influence on humanity.

The Ethical Implications of Technology for Decision-Making Autonomy

To a considerable extent, technology has denied users the ability to make autonomous decisions. The ethical principle of autonomy requires that individuals reserve the right to make independent decisions on issues affecting their lives (Martin et al. 311). When it comes to making better judgments and reducing information overload, digital technologies such as intelligent agent-assisted decision support systems (DSS) are increasingly being employed in the workplace (Ulfert et al. 1). Depending on their capabilities and applications, DSS range from simple systems, such as email spam filters, to more complex ones, such as DSS for cancer detection (Ulfert et al. 3). As artificial intelligence improves, however, DSS will become increasingly self-sufficient and capable of performing a wider variety of tasks. Despite the potential benefits for people’s information-processing demands, DSS adoption at work continues to pose challenges because employees struggle to adjust. Moreover, DSS use harms employees’ autonomy by denying them the opportunity to make independent decisions while simultaneously exposing them to extensive monitoring. For example, a highly autonomous DSS can manage large amounts of data, decreasing the information load on staff (Ulfert et al. 4). Yet when many parts of a job are outsourced to a system, employees’ activities and duties change, increasing the need for monitoring, which has in turn been linked to heightened performance expectations.
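To make this spectrum concrete, the sketch below shows the simplest kind of DSS mentioned above: a keyword-scoring email spam filter. It is a minimal illustration only; the keywords, weights, and threshold are invented for demonstration and are not drawn from Ulfert et al.

    # A minimal sketch of a simple decision support system: a keyword-scoring
    # spam filter. Keywords, weights, and the threshold are illustrative
    # assumptions, not values from any cited study.

    SPAM_KEYWORDS = {"winner": 2.0, "free": 1.0, "prize": 2.0, "urgent": 1.5}
    SPAM_THRESHOLD = 2.5  # messages scoring at or above this are flagged

    def spam_score(message: str) -> float:
        """Sum the weights of known spam keywords found in the message."""
        return sum(SPAM_KEYWORDS.get(word, 0.0) for word in message.lower().split())

    def is_spam(message: str) -> bool:
        """The micro-decision the system takes over from the user."""
        return spam_score(message) >= SPAM_THRESHOLD

    print(is_spam("You are a winner claim your free prize now"))  # True
    print(is_spam("Meeting moved to 3pm, agenda attached"))       # False

Even this trivial system quietly removes a small decision from the user, which prefigures the autonomy concerns that arise as DSS grow more capable.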

Another way in which technology has denied humans autonomy in decision-making can be analyzed through addiction to certain forms of technology. A number of studies have examined how technology might make people less free or independent (Bhargava 5; Martin et al. 311; Ulfert et al. 3). Many video games and online platforms use dopamine-releasing reward mechanisms to keep players coming back for more (Martin et al. 311). Gambling machines have long been designed to foster addiction, and games and social media products do the same thing: they make people crave positive feedback from interactions to the point that their lives may be thrown off balance (Bhargava 5). Technology companies use these addiction-inducing design patterns to keep people consuming their products.
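The engagement mechanic described above can be illustrated with a short sketch of a variable-ratio reward schedule, the pattern slot machines made famous: rewards arrive unpredictably, which makes the behavior unusually persistent. The reward probability here is an assumed value for demonstration, not a figure from the cited studies.

    import random

    # Sketch of a variable-ratio reward schedule: each interaction pays off
    # unpredictably, the pattern behind slot machines and many engagement
    # loops. The probability is an illustrative assumption.

    REWARD_PROBABILITY = 0.25

    def interact(rng: random.Random) -> bool:
        """One pull, scroll, or loot-box open; reward arrives at random."""
        return rng.random() < REWARD_PROBABILITY

    def session(interactions: int, seed: int = 42) -> list[bool]:
        """Log a session; the irregular spacing of wins is what keeps
        users returning, since the next reward always feels imminent."""
        rng = random.Random(seed)
        return [interact(rng) for _ in range(interactions)]

    outcomes = session(20)
    print(outcomes)                                # unpredictably spaced wins
    print(sum(outcomes), "rewards in 20 interactions")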

To drive purchases, addictive design plays on people’s natural instincts in ways that lead them to surrender their freedom to make decisions. Addiction has been suggested to contribute to some of the new harmful behaviors that have emerged in the information technology era, and it lies at the foundation of a variety of information technology addictions, though this notion has not been extensively tested by scholars (Sigerson et al. 520-526). In their study, Sigerson et al. (520) examined information technology addictions in relation to gambling disorder and alcohol use disorder. From their confirmatory factor analysis, the researchers observed that several forms of information technology addiction share a common underlying feature (Sigerson et al. 520). Sigerson et al. (526) concluded that information technology addiction is more similar to other behavioral dependencies than to substance-related addictions: for instance, gambling addiction was shown to be more closely connected to information technology addiction than alcohol addiction was.

Similarly, much modern technology relies heavily on machine learning (ML) algorithms, which make it possible for autonomous systems to operate. Machine learning plays a key part in automating choices in marketing, operations, and financial management (Martin et al. 308). In ML systems, algorithms utilize large amounts of data to determine what to do. As algorithms learn and make judgments, issues arise regarding the ethics of how they use the data they collect (Susser et al. 2). If biased data is used to train systems that make business decisions, for example, the result can be judgments that are unfair to people based on their race or gender.
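A minimal sketch can show how this happens: a naive model that “learns” from biased historical decisions simply reproduces the bias. The dataset below is fabricated purely for illustration; real systems and datasets are far more complex.

    # Sketch: a naive model trained on biased historical decisions
    # reproduces that bias. The data is fabricated for illustration.

    historical_decisions = [
        # (group, approved) -- group A was historically favored
        ("A", True), ("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    def train(data):
        """'Learn' an approval rate for each group from past outcomes."""
        rates = {}
        for group in {g for g, _ in data}:
            outcomes = [approved for g, approved in data if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        return rates

    def predict(rates, group, threshold=0.5):
        """Approve whenever the learned group rate clears the threshold."""
        return rates[group] >= threshold

    model = train(historical_decisions)
    print(model)                # {'A': 0.75, 'B': 0.25}
    print(predict(model, "A"))  # True  -- past favoritism persists
    print(predict(model, "B"))  # False

The point is that nothing in the code is overtly discriminatory; the unfairness enters entirely through the training data.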

Another example of data-driven interference with people’s ability to make their own decisions is when companies target consumers based on aggregated data from their technology use. Data aggregators routinely gather enough information about clients to infer their problems or desires, and then use this information to target ads more precisely (Martin et al. 312). Personal information, such as a cell phone’s location and Internet browsing history, can be used to target consumers in ways that make it hard for them to make different decisions. If marketers infer that an internet user has depression based on what the user searches for or where he or she goes, they might target the user with herbal treatments (Susser et al. 2). A person on a strict diet, or one who has just stopped gambling, may find themselves the target of food and casino ads. Business ethics has a long history of scrutinizing marketing strategies that target vulnerable people in ways that erode their ability to make their own decisions.
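The targeting mechanism criticized here can be reduced to a short sketch: observed behavioral signals are mapped directly to ads, including for users whose signals suggest vulnerability. The signal names and rules below are invented for illustration; real ad platforms are vastly more sophisticated.

    # Sketch of signal-based ad targeting. Signal names and rules are
    # hypothetical; real platforms use far richer behavioral models.

    AD_RULES = {
        "searched_depression_symptoms": "herbal mood remedy",
        "visited_casino_sites": "online casino signup bonus",
        "browsed_diet_forums": "meal delivery discount",
    }

    def select_ads(user_signals: list[str]) -> list[str]:
        """Map each observed behavioral signal straight to a targeted ad."""
        return [AD_RULES[s] for s in user_signals if s in AD_RULES]

    # A user who has just quit gambling and started a strict diet:
    print(select_ads(["visited_casino_sites", "browsed_diet_forums"]))
    # ['online casino signup bonus', 'meal delivery discount']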

The Cambridge Analytica scandal is one of the best-known examples of marketing being used to sway how people vote. The Facebook-Cambridge Analytica scandal made international headlines in March 2018 (Hu 1). Cambridge Analytica, a data analytics firm, was accused of using data from 87 million Facebook accounts to develop psychographically targeted adverts in an attempt to sway voters in the 2016 US presidential election (Hinds et al.). The threat to personal freedom is that political decisions can now be influenced in the same way that economic decisions can. Moreover, the extent of these worries has grown: many individuals realized that the hazards of targeted advertising were not limited to the commercial arena when concerns were raised in 2016 and 2017 about the use of information technology to influence elections worldwide. The ad-targeting tools on Facebook, YouTube, and other social media sites may have a significant impact on voters’ decision-making and behavior (Susser et al. 1). When Cambridge Analytica became engaged in the voter profiling and advertising scandals, it brought these issues to the attention of the general public; advertisements were allegedly tailored based on people’s “inner demons,” according to the firm itself.

Ethical Implications Related to Trusting Technology with Human Welfare

Emerging technologies like artificial intelligence (AI) cannot be trusted because they have diminished the importance of trust in human interactions and shifted accountability for errors away from those who create and use them. AI refers to machines that are capable of executing complex tasks and processing data in a human-like way (Ryan 2751). This intelligence is distinct from natural intelligence since it is artificially or machine-created (Ryan 2751). Image processing, audio synthesis, and natural language production are examples of computer systems that can operate and respond in human-like ways. Researchers have shown that a person’s willingness to adopt self-driving vehicles and other forms of AI is strongly correlated with their level of confidence in such systems (Lockey et al. 5463). Much of the current discussion on AI, big data, algorithms, and internet platforms is focused on whether these sets of technologies can be trusted (Ryan 2749).

One of the most challenging aspects of evaluating AI is people’s desire to humanize it. When human moral behavior is attributed to artificial intelligence, trusting the technology becomes considerably more challenging. Trust is one of the most important and defining qualities of human relationships, so the notion that AI can be trusted is contentious. According to Ryan (2749), AI cannot be trusted since it lacks emotional intelligence and the ability to be held accountable for its actions, both of which are necessary for normative accounts of trust. While AI may satisfy all of the logical criteria for trust, what it invites is better described as reliance than as trust (Martin et al. 311). Moreover, treating AI as trustworthy diminishes the importance of trust in human interactions and shifts accountability away from those who create and use it.

It is also important to consider three levels of trust in business ethics: an individual’s general disposition to trust, an individual’s trust in a specific firm, and an individual’s institutional trust in a market or community (Stilgoe 25). Ethical issues arise at each of these levels. When it comes to privacy, more trusting users may hold high privacy standards, yet they are less concerned about bad company behavior (Martin et al. 313). Users may have less faith in a technology because of how it is developed and delivered, and in some cases they overestimate the importance of certain technologies because of how those technologies are presented. A fourth level of trust may also emerge from this problem, one that applies specifically to businesses developing new digital technologies (Ryan 2761). Automated data analysis and algorithmic decision-making have become increasingly common in diagnostic health care decisions, and the need for trust is correspondingly greater in these applications.

Autonomous systems raise the same problems through misplaced trust. Ryan (2761) demonstrates the difference between AI betrayal and mere disappointment with a thought experiment: in the future, an AI driver named “Johnny-Bot” operates a passenger’s self-driving automobile. Because the passenger has to get his wife, Joanne, to the hospital, he tells Johnny-Bot to run a red light, but the AI refuses because its job is to keep the people in the vehicle safe. The robot also resists the passenger’s attempts to physically take control of the car (Ryan 2761). The passenger accuses Johnny-Bot of betraying him, as though the machine had the capacity to make deliberate judgments. In such situations, Ryan (2761) explains, “it would appear that he may have placed a degree or quantity of reliance in ‘Johnny’ that was not justified.” This is a clear instance of misplaced trust. Johnny-Bot disappointed its passenger because of its limited autonomy, but Ryan argues that it would be unfair to blame the robot, since it could not have acted otherwise given the software code encoded in it.
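The core of the Johnny-Bot example, a hard constraint the machine cannot override, can be sketched as follows. The command names and rules are hypothetical, intended only to show why blaming the robot is misplaced: the refusal is not a judgment the system chooses to make but the only behavior its code permits.

    # Sketch of a hard safety constraint in an autonomous vehicle, after
    # Ryan's Johnny-Bot example. Command names and rules are hypothetical.

    SAFETY_RULES = {
        "run_red_light": "traffic signal violation endangers occupants",
        "exceed_speed_limit": "speeding endangers occupants",
    }

    def execute_command(command: str) -> str:
        """Refuse any passenger command that violates a fixed safety rule.
        The system cannot 'decide' otherwise; the refusal is hard-coded."""
        if command in SAFETY_RULES:
            return "REFUSED: " + SAFETY_RULES[command]
        return "EXECUTING: " + command

    print(execute_command("run_red_light"))  # REFUSED: traffic signal ...
    print(execute_command("change_lane"))    # EXECUTING: change_lane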

Similarly, self-driving automobiles, the pinnacle of “smart” technology, are not pre-programmed with fixed instructions on how to drive. As the technology advances, the algorithms that regulate their motions improve (Stilgoe 25). Self-driving automobiles are putting machine learning and social learning in technology governance to the test, since technology and society are always learning from one another (Thompson). The trajectory and rhetoric of machine learning in transportation present a significant governance challenge, as shown by the social learning achievements and failures surrounding the well-publicized 2016 Tesla Model S accident (Stilgoe 25). In May 2016, a man died in a crash in Williston, Florida, while Autopilot was driving his car at 74 miles per hour (Thompson). Hence, when it comes to completely trusting AI, it is critical to recognize that the terms “autonomous” and “self-driving” are both deceptive. Like other technologies, they are founded on assumptions about societal needs, solvable issues, and economic potential.

The Ethics Concerning Technological Ill-Being in the Workplace

Technology today bombards individuals with information overload, leading to a scenario where they are denied the freedom to engage in meaningful rest. As more employees engage with ubiquitous information technology, the concept of “technological ill-being” has permeated the workplace. According to Leclercq-Vandelannoitte (339), technological ill-being is the tension or disconnect individuals experience between their social characteristics and goals and their use of modern information technology. The author examined how technological ill-being is manifested in enterprises and the extent to which managers are aware of this concept, addressing these issues through a case study viewed through the lens of Foucauldian theory (Leclercq-Vandelannoitte 339). In this view, technology has taken away the freedom of humans, rendering them slaves to technology.

A worldwide automotive manufacturer such as Toyota demonstrates how widespread IT deployment and a strong commitment to corporate social responsibility may coexist. One of this company’s distinguishing characteristics is that each of its geographically separated divisions developed its own organizational structure and work arrangements, rather than as part of larger corporate social responsibility (CSR) programs (Martin et al. 314). In this respect, technological ill-being is viewed as having localized implications for specific workers. Indeed, before technology took over the workplace, clients and colleagues could sit through two-hour business lunches while waiting for a response. In recent years, smartphones have frequently been placed on the table during business lunches as if they were part of the table setting (Shanahan). There is also a risk of being seen as unprofessional or hostile for failing to reply to calls, emails, or instant messages within a given period. Employees may now work from anywhere because of the spread of mobile devices such as smartphones, tablets, and laptops, together with the cloud. As a consequence of this “freedom,” however, employees are bombarded with information from all sides and at all times, which is one of its most painful trade-offs (Shanahan).

Because of the frenzied pace and continual flow of information, businesses in a highly mobile environment face significant risks. In various cases, business executives are less engaged in a company’s day-to-day operations and depend on lower-level employees to bring issues to their attention. Hence, employees must stay constantly hooked to their computers to monitor business transactions lest a loss materialize (Shanahan). In one reported case, while a financial services company’s top executives were attending off-site meetings or vacationing abroad, a sophisticated phishing campaign targeted one of its mid-level finance officers. Before senior executives questioned the daily record of cash transactions, the fraudsters were able to persuade the finance officer to make several wire transfers from the firm to accounts in China (Stilgoe 27). Internal controls, such as double signatures on wire transfers, holiday coverage, and training on suspicious emails, could have stopped the mid-level finance officer from going through with the transactions and spared the company a costly loss.

Technology has also increased the monitoring of employees. Workplace digital health technology includes accelerometers, which record bending, standing, and walking movements, as well as wearable sensors for detecting fatigue. These technologies attempt to automatically monitor and intervene in worker behavior using digital tools such as smartphones or stand-alone digital apps, and employers are combining traditional health practices with a range of digital technologies (Roossien et al. 2). Hence, human behavior in the workplace is significantly influenced by technological advancements. In the Toyota case above, employees lose their autonomy and control of their time to technology and, in turn, become disengaged. Tone, questions, conversation, and body language are all absent from electronic communication between managers and employees in the company (Leclercq-Vandelannoitte 342). Employee morale declines because workers are unable to take time off for meaningful rest.
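A crude sketch of how such wearable monitoring works is shown below: accelerometer magnitudes are mapped to activity labels, and every reading becomes a record. The thresholds and sample values are invented for illustration; the systems discussed by Roossien et al. use far richer signal processing.

    # Sketch of threshold-based activity monitoring from wearable
    # accelerometer readings. Thresholds and sample values are invented.

    def classify_activity(magnitude_g: float) -> str:
        """Map an acceleration magnitude (in g) to a coarse activity label."""
        if magnitude_g < 1.05:
            return "stationary"
        if magnitude_g < 1.4:
            return "walking"
        return "vigorous movement"

    readings = [1.0, 1.02, 1.2, 1.3, 1.6, 1.01]  # sampled magnitudes in g
    print([classify_activity(m) for m in readings])
    # ['stationary', 'stationary', 'walking', 'walking',
    #  'vigorous movement', 'stationary']

The ethical issue the paragraph raises is not the classifier itself but the fact that every such label can be logged and reviewed by the employer.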

Limitations and Future Developments

There are several limitations to consider. The three topical issues examined here, the ethical implications of technology for autonomy in decision-making, the ethical implications of trusting technology with human welfare, and the ethics of technological ill-being in the workplace, require further confirmation and refinement. This study relied on a review of existing literature, which implies that the findings are approximations based on previous research. For this reason, even though some of the findings are well supported, they remain untested in the context of this research and need additional study. Future researchers could examine these three areas using quantitative surveys, and could study individual emerging technologies and the ethical dilemmas they present to the workplace, particularly in the health sector. This will make it easier for those who design technology to anticipate how it will affect people in the future.

Conclusion

To a large degree, technology has taken away consumers’ capacity to make independent judgments. According to the ethical concept of autonomy, individuals should be able to make autonomous decisions on issues that affect their lives (Martin et al. 311). Technology has also robbed people of their decision-making autonomy via addiction to specific types of technology. Emerging technologies, such as artificial intelligence (AI), cannot be trusted because they have reduced the necessity of trust in human relationships and transferred responsibility for mistakes away from the people who make and use them. Finally, modern technology bombards individuals with information overload, denying them the opportunity to participate in meaningful rest; the idea of “technological ill-being” has penetrated the workplace as more workers interact with pervasive information technology.

Learning Outcomes

I have learned about the potential effects of computer technologies on individual, business, or community rights and obligations, effects that people may find bothersome, unfair, or difficult to comprehend. This is important for me since, as new ICTs continue to emerge, a number of ethical dilemmas are anticipated to surface. There is a good chance that the ethical challenges I identified will be significant across many kinds of ICT, though I believe there is considerable overlap between them. I have also learned that knowledge of the nature of these ethical challenges is a prerequisite for conducting responsible research and innovation. Understanding these ethical considerations may aid technology designers, developers, scholars, and even policymakers in considering potential technological trajectories in light of the costs and benefits they bring to humans.

There were no significant challenges in the research, as it was essentially desk research in which all the articles used were retrieved from the internet. However, selecting the right data for review and creating consistent themes or topics was a challenge. I believe that the resulting article contributes significantly to the area of ethics research by going beyond the ethical implications of any single new ICT. I am motivated to undertake more comprehensive research in the future to ensure a thorough understanding of these challenges.

Works Cited

Bhargava, Vikram, and Manuel Velasquez. “Ethics of the Attention Economy: The Problem of Social Media Addiction.” Business Ethics Quarterly, vol. 31, no. 3, 2021, pp. 1-24.

Cuesta, Marta, et al. “Welfare Technology, Ethics and Well-Being: A Qualitative Study About the Implementation of Welfare Technology Within Areas of Social Services in a Swedish Municipality.” International Journal of Qualitative Studies on Health and Well-being, vol. 15, no. 1, 2020, pp. 1-16.

Hinds, Joanne, et al. “‘It Wouldn’t Happen to Me’: Privacy Concerns and Perspectives Following the Cambridge Analytica Scandal.” International Journal of Human-Computer Studies, vol. 143, 2020, pp. 1-15.

Hu, Margaret. “Cambridge Analytica’s Black Box.” Big Data & Society, vol. 7, no. 2, 2020, pp. 1-6.

Leclercq-Vandelannoitte, Aurélie. “Is Employee Technological ‘Ill-Being’ Missing from Corporate Responsibility? The Foucauldian Ethics of Ubiquitous IT Uses in Organizations.” Journal of Business Ethics, vol. 160, no. 2, 2019, pp. 339-361.

Lockey, Steven, et al. “A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions.” Proceedings of the 54th Hawaii International Conference on System Sciences, 2021, pp. 5463-5474.

Martin, Kirsten, et al. “Business and the Ethical Implications of Technology: Introduction to the Symposium.” Journal of Business Ethics, vol. 160, no. 1, 2019, pp. 307–317.

Peters, Dorian, et al. “Designing for Motivation, Engagement and Wellbeing in Digital Experience.” Frontiers in Psychology, vol. 9, art. 797, 2018, pp. 1-15.

Roossien, Charlotte, et al. “Ethics in Design and Implementation of Technologies for Workplace Health Promotion: A Call for Discussion.” Frontiers in Digital Health, vol. 3, 2021, pp. 1-14.

Ryan, Mark. “In AI We Trust: Ethics, Artificial Intelligence, and Reliability.” Science and Engineering Ethics, vol. 26, 2020, pp. 2749–2767.

Shanahan, Katy. “Technology’s Impact on Integrity and Business Practices.” KROLL, 2017, Web.

Sigerson, Leif, et al. “Examining Common Information Technology Addictions and Their Relationships with Non-Technology-Related Addictions.” Computers in Human Behavior, vol. 75, 2017, pp. 520-526.

Stilgoe, Jack. “Machine Learning, Social Learning and the Governance of Self-Driving Cars.” Social Studies of Science, vol. 48, no. 1, 2018, pp. 25-56.

Susser, Daniel, et al. “Technology, Autonomy, and Manipulation.” Internet Policy Review, vol. 8, no. 2, 2019, pp. 1-15.

Thompson, Cadie. “New Details About the Fatal Tesla Autopilot Crash Reveal the Driver’s Last Minutes.” Business Insider, 2017, Web.

Ulfert, Anna-Sophie, et al. “The Role of Agent Autonomy in Using Decision Support Systems at Work.” Computers in Human Behavior, vol. 126, 2021, pp. 1-12.
