The Future of Artificial Intelligence in Fiction and Science

Introduction

Although numerous technological advancements are introduced into society on a regular basis, few have caused as much controversy as artificial intelligence (AI). Discussions regarding AI have caused rifts among scientists and philosophers alike, shaping public opinion on the matter. Eventually, these debates created a distinct theme for dystopian works of art that present a dire future awaiting humanity at the hands of a rampant AI. It is essential for the discourse communities that shape the public perception of AI to improve the acceptance of this technology through positive representation. All involved sides must analyze the reasons behind the negative perceptions of AI and how changing the direction of such depictions can affect common views on the topic. The goal of this essay is to argue that cooperation between the scientific and science fiction communities that revolve around AI is an essential step toward its destigmatization.

Discussion of the ‘Evilness’ of AI and Its Perceived Dangers

To begin discussing this topic, it is vital to acknowledge the long-standing issues with the representation of AI in society. Such ideas can be traced through the works of many novelists of the past and current centuries, starting with the nineteenth-century novelist Mary Shelley (Self). The notion of AI was initially considered nearly impossible, but the efforts of numerous roboticists led to the creation of the complex algorithms that are nowadays known as artificial intelligence (Guy-Warwick 1). Yet the topic remains controversial for the same ideas that drove its expansion, despite the technology currently being used for many improvements.

In modern culture, an evil AI mastermind is a common trope in literary and cinematic works. Drawing inspiration from early musings about what an automaton operating purely on logic could be, philosophers and novelists brought into existence such works as I, Robot, 2001: A Space Odyssey, and Terminator (Schmelzer). The fear of AI is the fear of an intelligent being whose abilities surpass those of a regular human. Nowadays, the demise of humanity at the hands of AI is one of the most widespread fears stemming from scientific advancement (Kupferschmidt 152). The likelihood of such an event remains to be determined, but such claims remain baseless until the possibility actually materializes.

A significant part of the misinformation stems from fictional works that depict AI as an evil entity posing a major threat to society, as such works have the power to affect the real-life decisions of the majority. A survey conducted by Edelman revealed that 81% of the general public and 77% of people involved in the practical implementation of AI are worried about further advancements of this technology (Edelman). Modern-day AIs are already capable of some forms of data manipulation that can be deemed threatening to society (Edelman). These facts signify that the reasons behind such fears may not be irrational, but among the general public they are fueled by the scenarios observed in movies and books, in addition to a lack of knowledge.

Discussion of the Discourse Communities

As has been said, there are two distinct discourse communities that must be assessed for their impact on the public image of AI. The first group is represented by the scientists who work closely on the development and practical implementation of AI in modern technologies. The second group is the science fiction community built around the depiction of AI in works of art, including writers, screenwriters, and fans who discuss their works.

The first group perhaps has the highest potential to prove that doomsday scenarios will remain what they are: fiction. At the same time, the majority of AI-related studies by roboticists and IT specialists are centered on business applications, such as consumer behavior analysis, manufacturing optimization, smart decisions in healthcare, and similar concepts (Guy-Warwick 20). These so-called ‘weak AI’ technologies, which apply AI to a narrow task, do not present any real threat to humankind beyond the perceived loss of jobs (Guy-Warwick 1). General AI that would be able to think independently is outside the scope of modern companies, as the profit from such technology could be challenging to realize.

In turn, the second group, which consists of philosophers, writers, and science fiction fans, often discusses general AI. There are countless books, movies, series, and games that show authors’ points of view on this technology or merely use a monstrous AI as a plot device. Their creators may not intend to agitate people against the adoption of general AI as a standard task handler, yet their works spread this apprehension (Schmelzer). The attraction to the topic within this discourse community is clear, although negativity turned into obsession could pose a threat.

There are significant differences between the two communities, as the focus of their work rarely overlaps. Scientists are rarely bothered by the idea of humanizing their creations; as a result, real-life robots may look and behave however scientists need, as long as they perform their functions and sell to the intended customers. However, the comparison of human and artificial thought processes and emotional responses often plays a critical role in AI-centered fictional stories, such as A. I. Artificial Intelligence by S. Spielberg (Heffernan 11). Philosophical debates about the dangers of AI occur in both communities, yet they analyze aspects of different scales and origins. By connecting the two discourse communities, it should be possible to address and eventually resolve the fears of the general population that have become common beliefs.

Problems to Solve

The fear of AI brings many issues that may be unclear at first. Kupferschmidt warns that discussions of fantasized scenarios “distract from real, solvable threats such as climate change and nuclear war” (152). Negative views on this category of technologies have real-life consequences, as the masses refuse to trust machines. Schmelzer states that “popular representation of AI gone bad is causing a general wariness among the public surrounding the development of intelligent systems technologies.” For the general population, the difference between ‘weak AI’ and general AI is nonexistent, putting any product that uses this category of technologies at risk of being unjustly stigmatized.

It is natural for people to have different opinions on a topic, but such opinions must be tethered to real-world facts. There are apparent gaps between the information circulating among IT specialists and what reaches the general public regarding AI (Edelman). The wildest ideas regarding AI development include doomsday scenarios, ranging from malfunctions in AI logic that lead to unplanned missile launches to AIs building robot armies and exterminating humankind (Schmelzer). However, if readers look further into the nature of the threats attributed to AI, they can notice that most tropes do not actually place the blame for a fictional catastrophe on AI itself. The malicious intent these mechanical entities possess is a mere reflection of humanity’s own insecurities, fears, and inner conflicts, projected through AI as a medium (Self). Scientists acknowledge that any trace of ‘evilness’ in AI would be the result of human interference, although this point is rarely adequately expressed (Kupferschmidt 152). Even the self-improving machines often depicted in works of fiction can only be let loose by humans.

An unexpected source of valuable insight into the impact of AI monsters comes from an IT specialist working on such technologies. Fueled by fiction, the fear of AI now prevents companies from advancing the algorithms used in their products, as these become too ‘unexplainable’ (Bloomberg). Simply put, people are not ready to trust computers with managing processes that require critical thinking. Bloomberg writes that “if people don’t know how AI comes up with its decisions, they won’t trust it.” This notion reveals that the problems researched in this paper are very much real and already impact decisions that affect humanity’s progress. While scientists may struggle to find public support, works of art have the power to influence generations and can be employed as a useful tool for swaying public opinion.

This essay does not claim that such discussions must be prohibited, as ethical considerations regarding AI have revealed numerous concerns that may well be valid, such as a widening wealth gap. However, the rejection of new technologies that show great promise in improving everyday lives is irrational and must be avoided by all means. For example, an invention that gives AI significant power over the food production chain could save a country from starvation, yet it may be rejected by people who fear AI. Society needs to be more open-minded with regard to new technologies.

The Course of Action

As a potential solution to the issue, it might be possible to create a forum or an annual conference that would gather different opinions on the subject. The participants could work together on educational materials that dispel common myths and misconceptions. There are many options for promoting the acceptance of AI, most of which require simple knowledge distribution. Such options can be easily tethered to already existing concepts and examples.

Although sporadic, some depictions of AI are already beneficial to its image. K. Ishiguro’s Klara and the Sun is a notable example of fiction that challenges the human-centric view of intelligence in a positive way, promoting the acceptance of AI as a great source of support (Self). This goal is also being pursued by corporations such as Amazon and Google, whose digital assistants show nothing but usefulness yet might still be affected by mistrust and prejudice.

Moreover, it is possible for scientists to work more closely with fiction writers on the representation of futuristic devices. For example, Star Trek fans can undoubtedly appreciate the existence of Google Translate, as a similar AI-powered device called the Universal Translator was shown in Star Trek back in 1966 (Wallace). Menial chores are slowly being transferred to algorithm-based machines as well, such as Roombas (Wallace). Public campaigns that explain the nuances of AI in depth while highlighting its benefits can be highly effective in resolving the situation.

AI assistance in decision-making processes can prevent many human errors. Scientists already utilize AI to save human lives, for example, through healthcare optimization (Guy-Warwick 23). However, opponents of this innovation point out that responsibility for AI failures would be unclear, to the point of disrupting the healthcare system (Guy-Warwick 26). Clearer boundaries for such incidents could be established with the help of philosophers who can define what such a mistake signifies.

Purely practical applications of AI are functional, yet they are limited. There is untapped potential in this technology that is being ignored because of uncertainties that could be resolved through discussion and analysis. Following the example of Isaac Asimov, a novelist whose works outlined a foundational basis for robotics as a separate discipline, scientists can integrate their work with ideas drawn from science fiction.

Conclusion

In conclusion, negative perceptions of AI are detrimental to the technological progress of modern society, yet there is a way out of this situation. By choosing to show AIs as a major threat to humanity, people unintentionally promote hatred and distrust toward an otherwise instrumental technology. Without a doubt, works of fiction featuring AI monsters often present a thrilling story, but when representation remains overwhelmingly negative, it can blend with reality in the most unexpected ways.

That way out lies in the cooperation of the discourse communities and the reimagination of AI in popular culture. Scientists possess the knowledge of the real-life applications of AI, both current and predicted, while fiction writers have the influence necessary to turn public opinion in a positive direction. Together, those who are interested in the future of this technology can work toward its acceptance. Human-like AI may seem like technology from the distant future, but cooperation can bring it closer to reality while making it safer for humanity, grounded in shared ethical concerns.

Works Cited

Bloomberg, Jason. “Why People Don’t Trust Artificial Intelligence: It’s an ‘Explainability’ Problem.” Genetic Literacy Project, 2018, Web.

Edelman. “2019 Artificial Intelligence Survey.” 2019, Web.

Guy-Warwick, Evans. Artificial Intelligence: Where We Came From, Where We Are Now, and Where We Are Going. 2017. University of Victoria, MS thesis. Web.

Heffernan, Teresa. “A.I. Artificial Intelligence: Science, Fiction and Fairy Tales.” English Studies in Africa, vol. 61, no. 1, 2018, pp. 10-15, Web.

Kupferschmidt, Kai. “Taming the Monsters of Tomorrow.” Science, vol. 359, no. 6372, 2018, pp. 152-155, Web.

Schmelzer, Ron. “Should We Be Afraid of AI?” Forbes, 2019, Web.

Self, John. “The Frankenstein’s Monsters of the 21st Century.” BBC, 2021, Web.

Wallace, Brian. “Artificial Intelligence and Science Fiction: A Swapped Reality.” Dumb Little Man, 2019, Web.
