Background
Artificial intelligence (AI) has brought innovation to many areas of life by making specific tasks far more manageable. Applying the technology across domains has made it possible to accelerate many processes and to make work faster and more efficient, since AI can rapidly analyze source data and reach decisions. This approach works, for example, in some medical settings, where AI can determine a diagnosis with high accuracy.
However, the technology carries specific risks: because AI learns from question-and-answer data, its algorithms may be misconfigured or may produce contradictory answers. Machine learning cannot fully abstract away the biases and discriminatory influences present in its training data, which creates the threat of building flawed algorithms. Studying discriminatory bias in AI is therefore necessary to develop strategies that mitigate its adverse effects.
Method
The central aspect of the work is data collection, which was carried out in several ways. A literature review was the main element, deepening understanding of the problem through the findings of prior studies. Ferrer et al. (2021) examined automated decisions that affect many people, and these analyses provided insight into how discriminatory policies can disadvantage or directly harm individuals (Ferrer et al., 2021; Mehrabi et al., 2021). Collecting information in this way helped establish that such situations do occur.
It was also essential to determine which areas are most often subject to discrimination. Cirillo et al. (2020) report that bias occurs most frequently in the healthcare sector. Reviewing these sources thus clarified the key ways in which biases and incorrect statements develop in AI and provided a basis for exploring how discrimination and bias in AI can negatively affect society.
A critical aspect of the methodology is the examination of model families such as decision trees, support vector machines, and neural networks. These models can be used individually or in combination, and together they account for the decisions an AI system makes, so the behavior of each family had to be analyzed separately. Understanding each of these directions of AI operation, and how each reaches its conclusions, is essential (Mehrabi et al., 2021). Studying these algorithms clarified how decisions are made and what influences them; a minimal comparison of the three families is sketched below.
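To make the comparison concrete, the following sketch trains one model from each of the three families named above on the same task. It is an illustration only, not the study's actual code; the synthetic dataset and scikit-learn configuration are assumptions.

```python
# Minimal sketch: compare a decision tree, an SVM, and a small neural
# network on one synthetic classification task (illustrative data only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

Because each family partitions the feature space differently, the same training data can yield different decisions, and therefore different biases, from each model.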
Demographically skewed decisions by AI can stem from a handful of features that significantly shift the results and mask more representative data. The presence of discriminatory behavior was therefore also examined through ethical analysis and statistical testing, which yielded more precise evidence of how the AI produces answers from the training material it previously received. One such statistical check is illustrated below.
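As a hedged illustration of this kind of statistical test, the sketch below computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups; the predictions and group labels are invented for the example.

```python
# Illustrative fairness check: demographic parity difference between
# two demographic groups (all values below are hypothetical).
import numpy as np

y_pred = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1])  # model decisions
group = np.array(["a"] * 5 + ["b"] * 5)             # group membership

rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
print(f"positive rate, group a: {rate_a:.2f}")  # 0.80
print(f"positive rate, group b: {rate_b:.2f}")  # 0.60
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.20
```

A difference near zero suggests parity, while a large gap flags potential bias that warrants a closer review of the training data and features.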
Results
Detailed research into AI decision-making yielded insight into how biases and discriminatory patterns emerge. One significant result is the presence of bias in many cases involving different demographic groups. A substantial layer of bias comes to light in human resource management and hiring, where AI expresses biases that can be traced to the kinds of work performed by different people (Mehrabi et al., 2021). This trend was observed most often in judgments about medical workers and nurses. The preconceptions were rather weakly expressed, and leading questions were needed to elicit detailed answers.
In such exchanges, neural networks could generate answers that were not entirely politically correct and did not meet standards of tolerance. When parsing the documents of people applying for a position, AI may sort their files incorrectly, producing misreadings of the results and their further distortion. Human resource management should be optimized as much as possible within organizations, and AI therefore cannot currently be entrusted with extensive automation and document-processing tasks (Burlina et al., 2021); otherwise, incorrect sorting can significantly harm overall enterprise productivity. A simple audit that can expose such sorting bias is sketched below.
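One way such mis-sorting can be exposed is an adverse-impact audit. The sketch below applies the widely used "four-fifths" rule of thumb, under which a group's selection rate should be at least 80% of the highest group's rate; the applicant counts are hypothetical.

```python
# Hypothetical adverse-impact audit of an automated resume screener:
# compare each group's selection rate to the highest group's rate.
selected = {"group_a": 45, "group_b": 20}     # applicants advanced by the AI
applicants = {"group_a": 100, "group_b": 80}  # applicants per group

rates = {g: selected[g] / applicants[g] for g in selected}
best = max(rates.values())
for g, rate in rates.items():
    ratio = rate / best
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here group_b's impact ratio is about 0.56, well below the 0.8 threshold, so the screener's output would merit human review before being acted on.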
A further significant result is that implementing AI algorithms in any area requires a high degree of variability in the algorithms so that all tasks can be performed efficiently and correctly. Without observing the rules and norms expected of human-to-human communication, AI cannot be fully integrated into such systems (Hoffman & Podgurski, 2019), which in turn creates additional costs and optimization problems. The findings confirm that AI tools cannot yet be used to their full extent: they currently have serious problems with discrimination, which manifest across fields of activity and can degrade the algorithms' results.
Discussion
The study’s results reveal a twofold pattern: AI algorithms show relatively strong distortions, along with a tendency toward discrimination and the acquisition of biases against particular demographic categories of the population. This serious issue must be addressed in determining how AI should develop (Burlina et al., 2021). Moreover, embedding the tool in any activity can cause significant harm if this is done without sufficient oversight of its operation.
AI algorithms and tools can be biased in all critical areas of life and can significantly affect how functions are performed in companies that adopt them. The results confirm that AI can exhibit discriminatory behavioral patterns when simulating communication or issuing job recommendations. This has become noticeable in the healthcare sector, where flawed AI decisions can take on a negative connotation and, as a result, hinder people in performing their duties and caring for patients.
On the one hand, such algorithms can positively affect the optimization and improvement of certain areas of business, potentially improving productivity and increasing automation by replacing manual work. Nevertheless, the existing concerns do not permit full deployment of the tool even in test mode (Hoffman & Podgurski, 2019). If people do not continuously monitor the operation of AI algorithms, negative consequences can accumulate over time and undermine whatever gains modern tools for analyzing and redistributing data might deliver. Continued inappropriate use of AI could set back progress toward eliminating discrimination, with consequences for society as a whole. Since many people use AI tools daily, such bias can have a disastrous effect and impose distorted thinking on them.
References
Burlina, P., Joshi, N., Paul, W., Pacheco, K. D., & Bressler, N. M. (2021). Addressing artificial intelligence bias in retinal diagnostics. Translational Vision Science & Technology, 10(2), 13. Web.
Cirillo, D., Catuara-Solarz, S., Morey, C., Guney, E., Subirats, L., Mellino, S., Gigante, A., Valencia, A., Rementeria, M., Chadha, A., & Mavridis, N. (2020). Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ Digital Medicine, 3(1), 81. Web.
Ferrer, X., van Nuenen, T., Such, J. M., Coté, M., & Criado, N. (2021). Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), 72-80. Web.
Hoffman, S., & Podgurski, A. (2019). Artificial intelligence and discrimination in health care. Yale Journal of Health Policy, Law, and Ethics, 19, 1. Web.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1-35. Web.