Risks of Artificial Intelligence Data-Mining by Tech Corporations

With the exponential advancement of Artificial Intelligence (AI), the notion that data is as valuable a commodity as oil has permeated the cultural zeitgeist, evidenced by innovations ranging from voice recognition to self-driving cars. With data now the lifeblood of tech corporations, the extent to which they can tap into these unprecedented opportunities raises concerns about privacy and security risks, regulatory gaps, and the costs of economic growth. These concerns are reflected in the erosion of privacy, distrust of AI systems, the lack of regulatory ambition in the United States, the world’s tech hub, and the need for improved regulations, as demonstrated by advertisement targeting.

The aggregation of personal data by tech corporations for profit risks compromising the right to informational privacy, that is, the right to control the flow of one’s personal information, with far-reaching societal repercussions. Regarding the consequences of unchecked corporate sway, Wang and Siau place the social, health, and financial repercussions of depriving people of this right at the center of concern (66). In contrast, Manheim and Kaplan highlight how the violation of privacy rights entailed by amassing personal data through AI systems could undermine the fundamental precepts of democracy, signaling the arrival of “the age of privacy nihilism” (118). However, Johnson argues that since corporations already follow baseline cybersecurity standards, strengthening their compliance requirements through more advanced regulations to better protect privacy rights would not constitute a formidable challenge. While all three sources recognize the societal risks that AI technologies pose to privacy rights, the two research papers differ on the characteristic implications of these risks. The magazine article, in contrast, stresses that enhanced regulations, if enacted, would uproot the cause of privacy concerns, the mismanagement of AI, rendering those symptomatic implications moot.

Furthermore, the low interpretability of AI systems’ inner structures may intensify these concerns. Wang and Siau note that the trustworthiness of AI technologies is limited by the opaque “black box” nature of their inner workings (69). Manheim and Kaplan agree that limited transparency can undermine users’ trust, adding that the “surveillance ecosystem” established by tech companies enables AI systems to extract sensitive data surreptitiously, further reducing their reliability (126). Moreover, Johnson remarks that it is unclear whether the trustworthiness of AI systems corresponds to their risk level, adding that even apparently low-risk systems can yield unreliable results due to intrinsic human bias in the aggregated data. Both studies agree that the opacity of AI systems can reduce their trustworthiness, while Manheim and Kaplan also recognize the covert nature of AI data mining, and Johnson pinpoints the human prejudice inherent in these systems.

Considering US legislative initiatives, both studies concur that AI regulation proposals significantly lag in addressing privacy and security concerns. For instance, Wang and Siau cite the limited transparency of current AI algorithms as the reason legal systems fail to implement AI-specific privacy protection policies (70). In contrast, Manheim and Kaplan argue that the devaluation of informational privacy impels regulatory entities to continue ignoring these concerns (129). Nevertheless, all three sources highlight the pronounced decisiveness with which the EU is pursuing AI regulatory initiatives. For example, Johnson notes that, if ratified, the EU’s AI Act will serve as one of the world’s first policies protecting individuals from the dangerous implications of AI technologies. Manheim and Kaplan note that sanctions under the EU’s General Data Protection Regulation have prompted tech giants, including Google, to transform their AI data-management practices (162). Furthermore, Wang and Siau adduce the EU’s Civil Law Rules on Robotics as the earliest attempt at regulating AI systems (70). All three sources thus illustrate the resoluteness of the EU’s regulatory landscape compared to that of the US.

While AI’s complex inner workings and the degradation of personal privacy certainly hinder legislative ambition, the EU’s precedents suggest that the lack of regulatory urgency may indicate a more nuanced US government-enterprise relationship. Indeed, the exploitation of personal data for capital gain, condoned by regulatory passivity, comes to form what Manheim and Kaplan call “surveillance capitalism” (124). As Johnson notes in discussing the AI Act, tech giants, apprehensive of this initiative, publicly lobbied EU policymakers to shape the contours of the regulatory proposal to suit their financial interests. Furthermore, Wang and Siau suggest that Silicon Valley could grow increasingly irritated by proposed AI regulations, as the continuous automation of previously human work may compel lawmakers to enact stricter tax policies (68). Similarly, Manheim and Kaplan state that, given data’s instrumental importance in AI-driven business models, tech corporations “will always push legal and ethical boundaries in pursuit of collecting more data” (119). All three sources agree that tech enterprises willingly pressure legal institutions to attain regulatory capture and bend regulations to their corporate interests.

The surveillance ecosystem powered by AI-driven data acquisition makes user data the highest-value commodity, and mining it a central revenue stream for these companies. As Manheim and Kaplan note, the products offered within this ecosystem are merely vehicles to amass personal user data, rendering the users themselves the product (119). As both studies agree, one of the most apparent ways these companies monetize data is by selling it for targeted advertising (Wang and Siau 67; Manheim and Kaplan 124). By acquiring data about users’ consumer behavior, purchase habits, and preferences, AI systems can generate precise predictions about future customer needs to produce user-specific persuasive advertisements. As Johnson notes in discussing the EU’s AI Act, Facebook expressed worries about whether the mandate to “ban subliminal techniques that manipulate people” also extends to targeted advertising. All three sources stress the scale at which these manipulative tactics are deployed, raising concerns about the degradation of personal data into a commodity and the need to regulate tech corporations’ predatory behavior.

These AI-driven ad-targeting companies not only capitalize on personal and often sensitive data but also do so inconspicuously, justifying it as “informing” users’ choices. As Manheim and Kaplan suggest, by covertly keeping tabs on user profiles, targeted advertisements can quickly devolve into subtle yet effective persuasion techniques (130). While both studies indicate that behavioral advertising jeopardizes the right to informational privacy, Manheim and Kaplan argue that manipulating predictions about user behavior also imperils the right to decisional privacy (Wang and Siau 67; Manheim and Kaplan 130). Manheim and Kaplan expand on the latter notion, arguing that even individual “influencers” who attempt such manipulation techniques frequently “tread too closely to the line separating persuasion from coercion” (130). Johnson notes that among the top concerns of the EU’s AI Act was the restriction of high-risk AI systems, including those designed to manipulate people, suggesting ad-targeting might be more disruptive than the studies indicate. Although all three sources underscore the significance of the erosion of privacy rights, they differ on the relative importance of this erosion at a larger societal level.

In summary, concerns about tech companies’ exploitation of personal data echo through the challenges of implementing trustworthy AI systems, protecting privacy rights, and enacting regulatory measures. All three sources agree that companies frequently push ethical and legal boundaries to continue profiting from data, that advertisement targeting calls for improved regulations, and that the EU is the current legislative vanguard in AI regulation. Nevertheless, the sources diverge on the implications of violating informational privacy, the factors that challenge AI systems’ trustworthiness, and the societal significance of the erosion of privacy rights.

Works Cited

Johnson, Khari. “The Fight to Define When AI Is ‘High Risk.’” Wired.

Manheim, Karl, and Lyric Kaplan. “Artificial Intelligence: Risks to Privacy and Democracy.” Yale Journal of Law and Technology, vol. 21, no. 1, 2019, pp. 106-185.

Wang, Weiyu, and Keng Siau. “Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda.” Journal of Database Management, vol. 30, no. 1, 2019, pp. 61-79.
