AI, Human Control and Safety

Introduction

This evaluative analysis focuses on the topic of artificial intelligence and human safety. The target of evaluation is Sam Harris’s TED talk titled “Can we build AI without losing control over it?”, in which the neuroscientist lays out a set of arguments demonstrating the growing risk that humanity will fail to ensure safe conditions for AI development. The core of the talk addresses the AI development industry and the current political and economic systems, which are vulnerable to and unprepared for the creation of general AI.

Summary

The primary audience for the selected text is anyone interested in the ethics and safety of technological innovation. The secondary audience includes people interested in the future of human progress and its dependence on AI. It is important to note that Sam Harris’s argumentation stems from three basic assumptions. First, he states that intelligence is defined by and dependent on information processing, which means that any appropriately structured physical system can become intelligent if it can process information (Harris). Second, he claims that humanity will continue developing and improving technology related to intelligent machines (Harris). Unless a major catastrophe halts human progress permanently, people will eventually give rise to superintelligence. Third, the spectrum of intelligence extends far beyond human intelligence, which means that even the smartest individuals are inferior to a superintelligence.

Evaluation Within Conversation Context

The conversation centers on humanity’s need to survive and prosper long after the invention of super-intelligent AI, and the present need is to establish the conditions that would make the launch of such an AI fully safe. It is in humanity’s best interest to be able to control, or at least safely create, an AI that possesses superior intelligence. Sam Harris accurately points out that the current political and economic conditions do not favor this shift, since AI development is an arms race between nations and companies. They are therefore incentivized to be first to create such an AI, which makes interest in AI safety a secondary priority in this ‘winner takes all’ scenario. However, the speaker underestimates human adaptability to technology, despite mentioning that humans and AI could merge.

It should be noted that Sam Harris’s warnings and concerns are valid, but humans are capable of using technology in synergy. Smartphones are devices with a specialized form of intelligence that can perform a number of specific tasks better than humans, such as long-distance communication, data analysis, and the display of measurements. Humans no longer need to rely on the brain’s memory capacity to retain ideas or situations because the phone can recall them perfectly through pictures, internet searches, and similar tools. This reduced need to memorize did not diminish human intelligence but rather extended it, since a person can focus his or her attention on more creative, intelligent activities. Humans are adaptable and will be able to incorporate AI into current systems, and the diminished need for routine intellectual labor will most likely leave humans focused on humanistic endeavors. Therefore, the presentation raises valid and logical concerns about AI but dismisses the innate human adaptability to technology.

Rhetorical Context

Sam Harris’s main goal in the talk is to present core arguments about human unpreparedness for AI and the inadequate emotional response to it. The neuroscientist relies on logos as an appeal: he lays out three major logical assumptions and considers them in the context of the current economic and political systems. As evidence, Sam Harris uses factual information about the speed of information processing in computers, whose electronic circuits function about a million times faster than biochemical ones. He also presents evidence from the AI industry itself, quoting its current leaders in order to show their unpreparedness as well. Sam Harris uses comparative techniques, such as analogy, to present the intelligence spectrum and the development timeframe from a new, unbiased perspective. For example, he shows the position of humans on the intelligence spectrum and the shortness of a 50-year period for ensuring safety conditions (Harris). These examples and statements are effective at illuminating the issue for the public, but he fails to show the current achievements and advancements in AI safety measures.

Synthesis of Sources

One of the major pitfalls of the presentation is the lack of analysis of current AI safety measures. Experts in the field state that “independent audit of AI systems would embody three ‘AAA’ governance principles of prospective risk Assessments, operation Audit trails, and system Adherence to jurisdictional requirements. Independent audit of AI systems serves as a pragmatic approach to an otherwise burdensome and unenforceable assurance challenge” (Falco et al. 566). In other words, the principles of legal adherence, risk assessment, and operational audit trails can be used to ensure safe AI utilization. It is also stated that although “AI safety research is not sufficiently anticipatory,” there are specific techniques and artifacts that can ensure AI safety (Hernández-Orallo et al. 2521). In addition, some experts suggest that the first super-intelligent artificial general intelligence should be launched with the sole purpose of ensuring that all other AIs are safe for humans (Turchin et al. 16). Therefore, there are plausible and already developed solutions for the safe development of AI.

The idea of co-evolution has become widespread in the modern information environment. It implies the gradual convergence of two relatively opposite trends. One of them involves changes in the subjects themselves under the influence of external and internal factors. The other is directed outward, that is, at the formation of objects that optimally correspond to the social and biological nature of humans and support their further strengthening and development. These two mutually complementary processes have features of both absoluteness and relativity. They are absolute in the sense that subject and object, living and non-living, natural and artificial, cannot completely lose their distinctness and turn into a single undifferentiated whole. At the same time, the opposition between the natural and the artificial is, in a certain sense, relative. The co-evolution framework assumes that the subject and the object, such as the living and the inanimate or the natural and the artificial, are two integral components of a single system, and thus there are no impenetrable boundaries between them. The distance between these two entities of first and second nature continues to shrink.

Conclusion

In conclusion, the presentation by Sam Harris is valid and well-structured, since it appeals to the audience’s logic and need for safety. The speaker outlines three sound and accurate assumptions about human progress and intelligence in order to raise valid concerns about AI safety. However, the neuroscientist fails to address current developments in the field of AI safety, where a wide range of plausible proposals have already been made.

Works Cited

Falco, Gregory, et al. “Governing AI Safety Through Independent Audits.” Nature Machine Intelligence, vol. 3, 2021, pp. 566-571.

Harris, Sam. “Can We Build AI Without Losing Control Over It?” TEDSummit, TED, 2016.

Hernández-Orallo, José, et al. “AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues.” 24th European Conference on Artificial Intelligence – ECAI 2020, vol. 325, 2020, pp. 2521-2528.

Turchin, Alexey, et al. “Global Solutions vs. Local Solutions for the AI Safety Problem.” Big Data and Cognitive Computing, vol. 3, no. 1, 2019, pp. 16-39.
