Introduction
Artificial intelligence (AI) is driving organizational change, productivity, and innovation. According to Al Mansoori, Salloum, and Shaalan (2021), AI comprises computational technologies that perform tasks involving human interpretation and decision-making. Apart from reducing costs, AI can improve the efficiency of systems and procedures. AI-based techniques take an interdisciplinary approach and are therefore applicable in different fields, such as health and medicine. Advances and interest in AI applications in healthcare have surged due to rapid technological development and the vast amount of patient data available for utilization (Davenport and Kalakota, 2019; Reddy et al., 2020). As a result, AI is changing clinical practice across various fields, including radiology, diagnostics, surgery, and rehabilitation. Other critical areas impacted by AI are disease diagnosis and clinical decision-making. In essence, AI technologies can analyze and report large volumes of patient data to detect underlying health problems and guide clinical decisions. McGinty and Lylova (2020) revealed that AI applications surface patterns in data that would otherwise remain hidden. Additionally, AI technologies can identify new treatment plans for patients and support health services management. However, AI entails high overall costs and can potentially result in biases and inequalities. Aligning AI with organizational systems can help ensure the provision of actionable, accurate, and impactful data.
AI in Management Theory
Scientific Management Theory
The fundamental aim of management is securing the maximum prosperity for business stakeholders. To this end, Frederick Taylor developed the scientific management theory, proposing to increase productivity by simplifying and optimizing work. Taylor argued that organizations and their employees should strive to achieve optimal results and be compensated for their efforts (Dar, 2022). For employees, the approach implies that they stand to benefit from better working conditions, shorter working hours, and higher wages. Taylor argued that employees should concentrate on executing the work, while senior management should plan and optimize performance (Dar, 2022; Sulieman, 2019). The essence of contemporary business, as presented by Taylor, is producing maximum efficiency. Specifically, the scientific management theory addresses employees’ concern that greater productivity results in job loss, defective management systems that work against employees and productivity, and the replacement of inefficient procedures (Merkle, 2022). Therefore, in implementing AI in the workplace, an organization should develop a science for its work elements; scientifically select, train, and develop its staff; cooperate with employees at all levels; and divide the work responsibly.
Systems Theory
Organizations are shaped by both their internal and external environments. Teece (2018) asserted that most successful organizations adapt to their environments, though to different degrees. For example, companies operating in dynamic environments maintain open systems in order to grow. In addition, organizations organize and process information to formulate innovative solutions that fit strategically with company goals and values. Systems theory examines the functions of complex organizations, including the inter-relations among AI functions such as directing and controlling operations (Hagendorff, 2020). However, the theory may be difficult to apply to complex organizations and may fail to specify the nature of interdependencies. From this perspective, AI can be viewed as a system constituting inputs, transformation, outputs, and feedback. Inputs can include data and artificial neural sub-systems, while learning and analysis fall under transformation. Outputs constitute insights, and the feedback loop conveys results and triggers essential adjustments. Therefore, organizations function as open systems comprising interdependent and interrelated sections, which interact as sub-systems.
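To make this systems view concrete, the minimal Python sketch below models an AI component as an open system: patient data enters as input, a transformation step produces an output insight, and a feedback loop adjusts the system when its outputs are not confirmed in practice. All class names, scoring rules, and thresholds are illustrative assumptions, not a reference to any specific product or method described in the literature cited here.

```python
# Illustrative sketch of an AI component viewed through systems theory:
# input -> transformation -> output, with a feedback loop adjusting the system.
# All names and numbers are hypothetical and for demonstration only.

from dataclasses import dataclass


@dataclass
class Insight:
    patient_id: str
    risk_score: float   # output of the transformation step
    flagged: bool       # whether the case is escalated to a clinician


class OpenAISystem:
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold  # decision boundary, tuned via feedback

    def transform(self, patient_id: str, features: dict) -> Insight:
        """Transformation: turn raw input data into an actionable output."""
        # Placeholder scoring rule standing in for a trained model.
        score = min(1.0, 0.01 * features.get("age", 0)
                    + 0.5 * features.get("abnormal_lab_count", 0) / 10)
        return Insight(patient_id, score, flagged=score >= self.threshold)

    def feedback(self, insights: list, confirmed: set) -> None:
        """Feedback loop: grow more conservative when flags are rarely confirmed."""
        flagged = [i for i in insights if i.flagged]
        if flagged and len(confirmed) / len(flagged) < 0.5:
            self.threshold = min(0.95, self.threshold + 0.05)


# Usage: input (patient records) -> transformation -> output -> feedback.
system = OpenAISystem()
records = [("p1", {"age": 72, "abnormal_lab_count": 6}),
           ("p2", {"age": 30, "abnormal_lab_count": 1})]
outputs = [system.transform(pid, feats) for pid, feats in records]
system.feedback(outputs, confirmed=set())  # clinicians confirmed none of the flags
```

The point of the sketch is the structure rather than the scoring logic: each stage maps onto one element of the open-system view described above, and the feedback loop is what keeps the sub-system aligned with the wider organization.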
Significance and the Impacts of AI
The dynamic and changing nature of the contemporary business landscape necessitates innovative solutions. Schwalbe and Wahl (2020) contend that the integration of AI into data processing and imaging continues to transform the healthcare industry and will keep growing substantially as technology advances. Apart from enabling more accurate disease diagnosis, AI improves the precision of treatment plans and guides decision-making. AI provides health professionals with data used to make sound decisions. One example of cutting-edge AI technology is the application of semantic segmentation to detect small objects and their abnormalities, assisting in early disease detection in patients. Greater diagnostic accuracy optimizes outcomes, promoting early and effective treatment. In addition, AI technology automates drug discovery and the identification of targets (Bohr and Memarzadeh, 2020; Davenport and Kalakota, 2019), which streamlines the process of finding the most effective treatments. For example, Genentech relies on an AI-based system from GNS Healthcare to aid its research into cancer treatments. AI ushers in an era of faster, cheaper, and more effective drug development.
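To illustrate the semantic segmentation idea mentioned above, the sketch below shows how a small convolutional network in PyTorch can produce a pixel-wise probability map for a medical image and threshold it into a mask of suspected regions. The tiny architecture and the 0.5 threshold are simplifying assumptions for illustration; real diagnostic segmentation models are far larger and must be clinically validated before use.

```python
# Minimal, illustrative semantic-segmentation sketch in PyTorch.
# The network and threshold are assumptions for demonstration only.

import torch
import torch.nn as nn


class TinySegmenter(nn.Module):
    """Maps a single-channel scan to a per-pixel abnormality probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # probabilities in [0, 1]


model = TinySegmenter().eval()
scan = torch.rand(1, 1, 128, 128)   # stand-in for a pre-processed scan
with torch.no_grad():
    probs = model(scan)
mask = probs > 0.5                  # binary mask of suspected regions
print(f"Flagged {int(mask.sum())} of {mask.numel()} pixels for review")
```

In a clinical pipeline, the resulting mask would be reviewed by a radiologist rather than acted on directly, which is consistent with the decision-support role described in this section.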
Furthermore, AI creates systems that aid patient care, from genetics to healthcare robotics. For example, exoskeleton robots help paralyzed patients walk and become self-sufficient. Another example is smart prostheses, which incorporate sensors to make them more responsive, with the option of connecting them to a patient’s muscles. Robots like Cyberdyne’s Hybrid Assistive Limb (HAL) exoskeleton assist in rehabilitation and care support by using sensors to detect electrical signals in the wearer’s body (Miura et al., 2021). Additionally, patients are increasingly getting involved in their treatment, from the data in their activity trackers to genome sequencing. Denny and Collins (2021) postulated that AI-driven, data-based genomic medicine improves the agility and accuracy of genetic disease detection and allows for individualized treatments. Generally, AI has allowed health professionals to discover illness patterns from clinical data.
Nonetheless, organizations face both opportunities and obstacles due to AI. Potential risks include medical errors, increased health inequalities, patient harm, data privacy breaches, and lack of transparency. AI systems may provide inaccurate data, resulting in patient injury and other health-related problems (Gerke, Minssen, and Cohen, 2020). For instance, when an AI system recommends the wrong treatment dosage or fails to notice a tumor during radiological scanning, a patient’s life could be at risk. Most injuries in the healthcare system result from medical errors even without the use of automation. However, AI errors are different because providers and patients react differently to injuries caused by AI-driven systems than to those caused by human error (Noguerol et al., 2019). Additionally, a single error in an AI system could potentially result in injuries at scale. Training AI systems demands data from sources like insurance claims records, electronic health records, consumer-generated information, or pharmacy records. Medical data, however, is often fragmented across departments and systems. Data split across several formats and systems increases the risk of error and reduces comprehension.
Another set of risks involves bias, inequality, and privacy concerns. Forcier et al. (2019) and Rickert (2020) implied that the need for large datasets creates incentives for third parties to collect patient data and thereby violates privacy. AI also implicates privacy by predicting private patient information that was never disclosed. For example, an AI system might infer that a patient has post-traumatic stress disorder (PTSD) based on certain symptoms, even though the person never revealed that information to their care provider. In terms of bias and inequality, AI systems depend on specific data and can incorporate biases (DeCamp and Lindvall, 2020). For instance, data gathered principally in medical research centers may underrepresent certain populations, thereby reinforcing underlying inequalities and biases in the healthcare system.
Recommendations
AI presents several opportunities and threats; thus, organizations should consider several factors before adopting AI systems to drive operations. For one, the development of innovative solutions and techniques as a standard of efficient care can provide safeguards against the use of poorly validated AI systems. Replacing established procedures and processes in healthcare with AI diagnostics requires further validation. For this reason, health institutions should support work to prepare their staff for promising AI applications in clinical practice. The creation of validation approaches is therefore critical to evaluating the performance of AI under different training sets (Forcier et al., 2019). The potential for misinformation, which could impede the adoption of AI systems, warrants transparent policies that engage organizations in endorsing best AI practices in health. Noguerol et al. (2019) concurred that supporting the development of such policies and validation approaches enables the adoption of AI for healthcare delivery and community and public health. Institutions that support AI development have demonstrated its value, which encourages the creation of large amounts of data for further research.
Moreover, AI applications require training data to prevent poor performance in the absence of significant data streams. For example, health outcomes are significantly affected by social and environmental factors. The imbalance between the diverse data sets AI applications need and the information captured by particular systems necessitates tracking programs that align with AI functionality and IT capabilities, as well as protocols for collecting and integrating diverse data (Gerke, Minssen, and Cohen, 2020; Reddy et al., 2020). Access to quality data is critical in implementing AI applications, since performance rests on quality training sets. Therefore, organizations should support access to research databases for the development of AI systems and create an efficient data paradigm.
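A minimal sketch of such a collection-and-integration protocol is given below. The source formats and field names are hypothetical; the sketch simply shows how records fragmented across systems, such as an electronic health record export and an insurance-claims feed, could be mapped into one consistent schema before being used as AI training data.

```python
# Illustrative sketch of harmonising fragmented health records into one schema
# before they are used as AI training data. All field names are hypothetical.

import csv
import io

EHR_CSV = """patient_id,dob,hba1c
p1,1956-03-02,7.9
p2,1987-11-15,5.4
"""

CLAIMS_RECORDS = [
    {"member": "p1", "diagnosis_code": "E11.9"},
    {"member": "p3", "diagnosis_code": "I10"},
]


def harmonise(ehr_csv: str, claims: list) -> dict:
    """Merge both sources into {patient_id: unified record}."""
    unified = {}
    for row in csv.DictReader(io.StringIO(ehr_csv)):
        unified[row["patient_id"]] = {
            "dob": row["dob"],
            "hba1c": float(row["hba1c"]),
            "diagnoses": [],
        }
    for claim in claims:
        record = unified.setdefault(
            claim["member"], {"dob": None, "hba1c": None, "diagnoses": []})
        record["diagnoses"].append(claim["diagnosis_code"])
    return unified


training_table = harmonise(EHR_CSV, CLAIMS_RECORDS)
print(training_table)  # one consistent schema, with gaps left explicit as None
```

Keeping missing values explicit, as the sketch does, is one simple way to make the gaps in fragmented data visible to downstream model developers rather than silently discarding incomplete records.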
Revolutionary changes in healthcare are taking place with the use of AI to monitor individual health, and most of these developments occur outside of clinical settings. AI will become increasingly embedded in various fields, including health. For example, AI may be used to create health-related mobile monitoring devices, which generate massive datasets that open several possibilities in the research and development of healthcare tools. For this reason, organizations should support the development of AI applications and data infrastructure to integrate information from smart devices while ensuring the transparency and privacy of AI applications.
Conclusion
Artificial intelligence (AI) is becoming prevalent in modern organizations and everyday life. The application of AI is useful across all industries, but the applied strategies can differ. In particular, the progress of AI in healthcare has the potential to benefit patients and providers in several ways, including administrative tasks and treatment plans. However, while AI can improve performance and productivity, it can also increase inequality and bias, expose data to breaches, and produce inaccurate information. Combining human evaluation with machine learning, centralizing AI applications, and developing a flexible methodology for integrating AI with organizational systems can improve the effectiveness of AI deployment. The potential for AI in organizations is extensive and will continue to grow with innovation.
Reference list
Al Mansoori, S., Salloum, S.A. and Shaalan, K. (2021). ‘The impact of artificial intelligence and information technologies on the efficiency of knowledge management at modern organizations: A systematic review.’ Recent advances in intelligent systems and smart applications, pp.163-182. Web.
Bohr, A., and Memarzadeh, K. (2020). ‘The rise of artificial intelligence in healthcare applications.’ Artificial Intelligence in Healthcare, 25–60. Web.
Dar, S. A. (2022). ‘The relevance of Taylor’s scientific management in the modern era.’ Web.
Davenport, T., and Kalakota, R. (2019). ‘The potential for artificial intelligence in healthcare.’ Future healthcare journal, 6(2), 94–98. Web.
DeCamp, M., and Lindvall, C. (2020). ‘Latent bias and the implementation of artificial intelligence in medicine.’ Journal of the American Medical Informatics Association: JAMIA, 27(12), 2020–2023. Web.
Denny, J. C., and Collins, F. S. (2021). ‘Precision medicine in 2030—seven ways to transform healthcare.’ Cell, 184(6), 1415-1419.
Forcier, M. B. et al. (2019). ‘Integrating artificial intelligence into health care through data access: Can the GDPR act as a beacon for policymakers?’ Journal of law and the biosciences, 6(1), 317–335. Web.
Gerke, S., Minssen, T., and Cohen, G. (2020). ‘Ethical and legal challenges of artificial intelligence-driven healthcare.’ Artificial Intelligence in Healthcare, 295–336. Web.
Hagendorff, T. (2020). ‘The ethics of AI ethics: An evaluation of guidelines.’ Minds and Machines, 30(1), 99-120. Web.
McGinty, N. A., and Lylova, E. V. (2020). ‘Transformation of the HR management in modern organizations.’ In 1st International Conference on Emerging Trends and Challenges in the Management Theory and Practice (ETCMTP 2019) (pp. 18-21). Atlantis Press. Web.
Merkle, J. A. (2022). ‘Management and ideology: The legacy of the international scientific management movement.’ University of California Press.
Miura, K. et al. (2021). ‘Successful use of the hybrid assistive limb for care support to reduce lumbar load in a simulated patient transfer.’ Asian spine journal, 15(1), 40–45. Web.
Noguerol, T. M. et al. (2019). ‘Strengths, weaknesses, opportunities, and threats analysis of artificial intelligence and machine learning applications in radiology.’ Journal of the American College of Radiology, 16(9), 1239-1247. Web.
Reddy, S. et al. (2020). ‘A governance model for the application of AI in health care.’ Journal of the American Medical Informatics Association: JAMIA, 27(3), 491–497. Web.
Rickert, J. (2020). ‘On patient safety: The lure of artificial intelligence-are we jeopardizing our patients’ privacy?’ Clinical orthopaedics and related research, 478(4), 712–714. Web.
Schwalbe, N., and Wahl, B. (2020). ‘Artificial intelligence and the future of global health.’ Lancet (London, England), 395(10236), 1579–1586. Web.
Sulieman, M. S. (2019). ‘Roots of organizational knowledge in classical management theories: A literature review.’ International Journal of Business and Social Science, 10(10), 8-15. Web.
Teece, D. J. (2018). ‘Dynamic capabilities as (workable) management systems theory.’ Journal of Management & Organization, 24(3), 359-368. Web.