Introduction
Over the past couple of decades, there has been marked progress in exploring digital technologies and the opportunities they provide. Integrating artificial intelligence (AI) into various contexts for automation purposes has become one of the key trends across industries in recent years (Hernandez-Boussard et al. 2013). However, this innovation may cause more harm than good because of the limited control over the tools in question. Since humans create AI, the threat of bias is quite tangible, and it grows considerably once AI is deployed in a public context, where there is little opportunity to build ethics into the AI framework.
Issue Discussion
The problem of bias in AI's assessment of certain information has been degrading the quality of data processing and, consequently, the functioning of the relevant systems and industries. For instance, studies by Akter et al. (21), DeCamp and Lindvall (2021), and Bouschery et al. (141) confirm that racial bias is quite tangible in AI-powered technologies. Similarly, McLennan et al. (21) emphasize that attempts to build the notion of colorblindness into AI only reinforce prejudice against racial and ethnic minorities.
Acknowledging the opposing perspective, one should also note that AI is constantly learning and evolving, which suggests that it could avoid racial bias in the future. However, the learning process will inevitably be contaminated by the prejudices and biases of the people developing AI. Therefore, additional efforts must be made to examine methods of minimizing racial bias in AI.
Conclusion
Since AI tools cannot be imbued with ethical standards and principles, they are bound to incorporate biases into their systems due to the imperfections of the people creating them. As a result, deploying AI-powered technologies to address public concerns and social issues does not seem to be a reasonable step. At the same time, the opportunities for automation that AI provides call for further examination of the problem and the search for a viable solution.
Works Cited
Akter, Shahriar, et al. “Addressing Algorithmic Bias in AI-driven Customer Management.” Journal of Global Information Management (JGIM), vol. 29, no. 6, 2021, pp. 1-27. Web.
Bouschery, Sebastian G., et al. “Augmenting Human Innovation Teams with Artificial Intelligence: Exploring Transformer‐Based Language Models.” Journal of Product Innovation Management, vol. 40, no. 2, 2023, pp. 139-153. Web.
DeCamp, Matthew, and Charlotta Lindvall. “Latent Bias and the Implementation of Artificial Intelligence in Medicine.” Journal of the American Medical Informatics Association, vol. 27, no. 12, 2020, pp. 2020-2023. Web.
Hernandez-Boussard, Tina, et al. “MINIMAR (MINimum Information for Medical AI Reporting): Developing Reporting Standards for Artificial Intelligence in Health Care.” Journal of the American Medical Informatics Association, vol. 27, no. 12, 2020, pp. 2011-2015. Web.
McLennan, Stuart, et al. “AI Ethics Is Not a Panacea.” The American Journal of Bioethics, vol. 20, no. 11, 2020, pp. 20-22. Web.