Service Learning Project: Ethics in A.I.

Introduction

As computing technologies have become more deeply rooted in human lives, the Machine Intelligence Research Institute (MIRI) was chosen as the site for this service learning project. Although the organization's members work on formal tools for general-purpose AI systems, their work can help clarify how to develop reliable systems for various professional areas. Participating in this non-profit organization was a valuable experience: collaboration with MIRI staff members deepened my vision of ethics in AI and helped build consensus on the need to involve non-technical experts in addressing ethical questions. My research at MIRI suggests that transparency, algorithmic bias, and privacy require further study in different professional environments.

Interview with the Representative of the MIRI

Before beginning my learning project on September 19, I interviewed a representative of the Institute about the organization itself, AI, and its technical and ethical aspects. For confidentiality purposes, the interviewee is referred to as Julia Peterson. First, I asked her whom MIRI serves. Julia said that since the Institute's work revolves around AI, it ultimately serves the people who will be the technology's end users. The challenge is making AI available, usable, and safe for them, which is why MIRI's employees do what they do. Much about AI remains unknown because no one can accurately predict what effect the technology may have on the world.

Further, the conversation touched on what AI is and how it works. According to Julia, AI is human intelligence simulated by a computer system; it includes speech recognition, natural language processing, machine vision, and expert systems. AI has also become a buzzword for promoting many products and services, where it is usually limited to machine learning. In practice, AI systems are more complex and require many components and processes to work. Such a system has to process huge volumes of specific training data, analyze the patterns and correlations in that data, and use what it has learned to make predictions about new cases. AI is also not just another programming language: it requires specialized hardware and software.
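The learning loop Julia describes, ingesting training data, finding a pattern, and using it to predict, can be illustrated with a deliberately tiny sketch. The data (study hours versus test scores) and the straight-line model are invented for illustration and have nothing to do with MIRI's actual systems:

```python
# Minimal illustration of "learning": the model finds a pattern in
# training data (here, a straight line fit by ordinary least squares)
# and uses it to predict an unseen case.

def fit_line(xs, ys):
    """Return slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": hours of study vs. test score (invented).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 68]

slope, intercept = fit_line(hours, scores)

def predict(h):
    """Apply the learned pattern to a new input."""
    return slope * h + intercept

print(round(predict(6), 1))  # projected score for 6 hours: 72.3
```

Real AI systems replace the straight line with models of millions of parameters, but the structure, fit to data, then generalize, is the same, which is why the quality and coverage of the training data matter so much.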

As for the technical aspects of implementing AI in future devices, Julia said this will still take considerable time because creating a reliable system is a difficult challenge. MIRI works on developing smarter-than-human systems, and aligning such systems with human values is a major open problem today. Even advanced AI systems encounter multiple failures whose causes are poorly understood, and ways to delegate work to machines safely are still being studied. Since such a system is designed to be smarter than humans, the possibility of deception or manipulation by the machine cannot be excluded.

Finally, Julia said that the ethical questions raised in designing AI systems do not differ significantly from those raised by other technological products and are linked to social requirements. Transparency, or the predictability of the machine, is one of the key challenges because both the technology and the environment around it are constantly changing. These changes make the system hard to control, but MIRI staff are trying to find solutions to this issue. Another major ethical problem is responsibility when an AI fails to perform a task correctly: it becomes unclear who is at fault, the developers or the users. Therefore, MIRI programmers constantly try to consider all possible criteria when creating algorithms.

Description of Machine Intelligence Research Institute

The Machine Intelligence Research Institute is a non-profit research organization located in Berkeley, California, that specializes in studying the potential existential risks of artificial intelligence. Its mission is to ensure a positive impact by creating highly capable and reliable intelligent systems. The most complicated task in this field is making a computing tool work as intended without constant human supervision. Like any organization, MIRI has strengths that facilitate its work and weak points that complicate it.

SWOT Analysis

Strengths

First, MIRI has a strong research base that is constantly growing and enriching the understanding of how AI works and how it should work. The abundance of articles dealing with the technical aspects of AI shows that the field is developing continuously and that AI has the potential to become an integral part of people's lives. Smart systems are thus a subject of interdisciplinary attention and can be applied in many industries and areas devoted to improving overall well-being.

Second, MIRI cooperates with technical experts from universities and organizations such as the University of Oxford and the University of Cambridge, to name a few well-known ones. Such collaborations demonstrate the global interest in AI and the involvement of professionals in computing science. They help consider AI not only from the perspective of technological efficiency but also make MIRI one of the leading organizations in AI research.

The third strength is the Institute's strategic focus on aligning AI with human interests. MIRI studies how to make AI friendlier to people and safer from potential risks. Its experts adhere to the principles of error tolerance and value learning, which help improve the technology. When developing an AI system, it is crucial to ensure that it learns human values and adopts them as its own.

Weaknesses

Some unpredictable situations make it impossible to stay within the estimated budget. In 2020, the Institute incurred more expenses than planned because of the COVID-19 pandemic, including spending part of the budget on staff relocation, and it had to adjust its strategy in response to existing and potential challenges in the following years (Bourgon, 2020). On the one hand, the pandemic has taught the staff to account for unforeseen circumstances when planning activities; on the other hand, no one knows what scale the next challenge may reach.

The next weakness is the tendency to create AI systems from scratch. The first aspect of the problem is transferring human brain algorithms into an existing software system, which is nearly impossible because the workings of the brain are not yet studied to the extent that would allow such experiments. The second aspect is the assumption that an AI system is more controllable when some of its components are built anew. Unpredictable expenses for such components may then leave the budget short for other important projects, slowing or even halting the work.

Another problem is the Institute's heavy focus on the technical side of AI. A prospective staff member must be proficient in computer science, programming, and mathematics. Such a narrow specialization, however, discourages thinking broadly about AI's impact on the world. Programmers may therefore neglect questions about user safety or the specifics of using AI in narrow contexts such as education, employment, and health care. MIRI's technicians may thus be unable to resolve field-specific questions without relevant professional advice.

Opportunities

MIRI gives people around the globe a chance to be involved in creating the future, and involving new generations may help extend strategic approaches to the study of AI. Although the Institute offers volunteering and full-time jobs, as well as assistance with immigration and visas for foreign experts, involving non-technical professionals would also be a valuable contribution to interdisciplinary research. Since the organization strives to align AI systems with human values, it must understand how those values work in different areas. Engaging experts in psychology, cognitive science, and other fields related to human interaction could therefore represent a significant opportunity for progress in AI research.

In addition, MIRI could shift its focus toward interdisciplinary activities to raise funds. People already donate to the organization, but, as mentioned above, the budget may not be stable. Interdisciplinary research would attract more donors because combined research can draw the attention of people who are not interested in purely technical matters. At the same time, the Institute would need to balance the technical and non-technical sides, which requires a relevant organizational strategy.

Threats

One of the main threats is the human factor, which may lead to drastic consequences. A staff member who is eager to join a project but lacks an understanding of how and why AI works can be dangerous. When planning an AI system, the developer has to identify clearly what the system should do. Moreover, prejudices programmers may hold regarding race or gender can significantly affect their perception and thinking, resulting in an incorrect formulation of the values to be incorporated into the system. One mistake can thus degrade the whole system and make it unreliable, which in turn jeopardizes the use of AI technology in various fields.

Another threat is a shortage of experts who share the alignment mindset. MIRI has strict criteria for selecting staff and thus risks having a limited pool of technical professionals; the Institute seems to seek a perfect candidate who will follow all the rules from the start. The problem is that people with the necessary mindset may have insufficient technical skills, while those with comprehensive technical knowledge may neglect some alignment tenets. On the one hand, appropriate characteristics and traits in staff members can prevent dangerous mistakes; on the other hand, the Institute will have to spend more if the right technicians turn out to be foreigners, and implementing programs that teach alignment principles will also require costs. Therefore, MIRI might need to rethink its strategic approaches to hiring in order to attract more potential experts.

Learning during the Experience

AI and ML technologies are used to solve complex, long-term problems, so it is crucial to recognize that even realistic goals are difficult to implement in AI systems due to numerous potential ethical issues. On September 19, I began researching at the Institute the ethical aspects of using AI technologies in different industries. The study aimed to help MIRI staff complement their vision of the role of ethics in AI and to build consensus on involving non-technical experts to address field-specific ethical problems of using AI. I also collaborated with MIRI staff members, who explained the technical peculiarities of AI at different stages of the research; notably, they demonstrated that the environmental model is one of the key elements of AI design.

During the research, I found that the AI-associated ethical challenges are largely the same as those Julia described in the interview: transparency and safety, fairness and algorithmic bias, data privacy, and informed consent for the use of data (Naik et al., 2022). Transparency comprises two aspects, namely the comprehensibility and the accessibility of data (Naik et al., 2022). The opacity of algorithms also raises concerns about whether using AI systems will be legal, since machines can operate according to unclear rules and acquire new behavioral traits, which disrupts the legal concept of liability (Naik et al., 2022). The question thus emerges whether it is possible to design a system that will learn only appropriate and non-injurious patterns.

Using AI in legal and criminal justice systems may face obstacles. In criminal justice, risk assessment is used to estimate whether a particular defendant is likely to commit an offense in the future (Yapo & Weiss, 2018). Investigations have shown that AI-driven risk assessment systems labeled white defendants as posing a lower risk of recidivism and Black defendants as posing a higher risk, even though white defendants in reality reoffended more often (Yapo & Weiss, 2018). Notably, the AI system should not have considered race at all when evaluating the recidivism risk of a particular offender. Such a situation suggests that engineers and programmers may unconsciously transfer their values and prejudices to the machine.
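One mechanism behind such outcomes can be shown with a hypothetical sketch: even when the protected attribute is excluded as an input, a correlated proxy feature (here, an invented "neighborhood" label) can carry the same signal, so a naive model reproduces the disparity. All data below is fabricated purely for illustration and does not describe any real system:

```python
# Hypothetical proxy-bias sketch: race is not a feature, but in this toy
# data "neighborhood" correlates with demographic group, so a model that
# scores by historical reoffense rate reproduces the group disparity.

# (neighborhood, reoffended) records, invented for illustration.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def train_rate_model(rows):
    """Score each neighborhood by its historical reoffense rate."""
    totals, counts = {}, {}
    for hood, label in rows:
        totals[hood] = totals.get(hood, 0) + label
        counts[hood] = counts.get(hood, 0) + 1
    return {hood: totals[hood] / counts[hood] for hood in totals}

risk = train_rate_model(records)
print(risk["A"], risk["B"])  # 0.75 0.25: the excluded attribute leaks back in
```

The point is that simply deleting a sensitive column is not enough; if historical data encodes a disparity, correlated features will re-express it, which is why fairness audits look at model outputs across groups rather than only at the input features.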

AI technology can be helpful in practices such as hiring, though the machine can be dangerous in terms of data privacy and informed consent. Big data gathers user information that may be private, for example, information related to the health condition of a family member (Dattner et al., 2019). The problem is that employers may use an AI system to predict whether an applicant fits a certain position, which can lead to discrimination (Dattner et al., 2019). Potential employees are thus threatened with the misuse of sensitive information because no specific legal mechanism governs the technology (Dattner et al., 2019). Implementing AI systems therefore poses a high risk to user privacy.

Proposed Improvements in the Organization

MIRI has an experienced staff and constantly strives to enrich itself with young experts who can contribute to the development of AI technologies. However, one area of concern is the lack of professionals from other fields, such as psychology, law, medicine, e-commerce, and education. Since AI is expected to replace humans in various areas of activity, it would be more reasonable to plan machine design with advice from experts in particular fields, who can see potential flaws or problems specific to their industries or services.

The largest challenge, however, concerns the legal regulation of AI. In view of this, MIRI needs to develop cooperation with lawyers who can guide the research process on ethical and legal matters. Moreover, learning how AI works would help legal professionals develop a relevant framework for implementing and using the technology while protecting user privacy. In turn, this can help programmers create AI designs that properly select the algorithms appropriate to a particular situation.

Serving the Organization in the Future

MIRI is an organization with huge potential and a responsible mission. Serving it in the future would interest students of both technical and non-technical majors. Students of computing technologies can find many opportunities to learn how AI works and how to improve its performance; for instance, aligning the machine with human values remains a basic subject of research. Another open area is teaching an AI system to respond to situations that are not pre-programmed. One could also devote research to causality in AI systems, since a machine without a notion of cause and effect may fail in operation. Students and experts in computer science can therefore make a valuable contribution to MIRI.

Non-technicians can be involved in studying the impact of AI in their professional areas and in researching what other products or services might be advanced with AI. Beyond improving the educational process or attracting new customers with the help of AI, they can also help create an aligned system: psychologists, sociologists, and cognitive scientists, for example, can inform AI design through research on the human brain and on values in different cultures. The development of AI thus requires an interdisciplinary approach, which makes the subject interesting for everyone who recognizes the connection between technology and life.

Conclusion

In conclusion, MIRI is an organization with a promising future because AI is expected to play a significant role in the world. Volunteering at the Institute brings invaluable experience, as it allows one to observe and participate in the process of creating a technology that may substantially change people's lives. Nevertheless, AI still needs thorough research on privacy, algorithmic bias, and transparency, because MIRI's programmers cannot fully predict how AI will behave. These issues are closely related to the ethical problems that keep the technology from being used to its full extent. It has also been recommended that the Institute involve more experts from non-technical fields to address the problems that technicians may unintentionally overlook.

References

Bourgon, M. (2020). 2020 updates and strategy. Machine Intelligence Research Institute. Web.

Dattner, B., Chamorro-Premuzic, T., Buchband, R., & Schettler, L. (2019). The legal and ethical implications of using AI in hiring. Harvard Business Review, 1-7.

Naik, N., Hameed, B. M. Z., Shetty, D. K., Swain, D., Shah, M., Paul, R., Aggarwal, K., Ibrahim, S., Patil, V., Smriti, K., Shetty, S., Rai, B. P., Chlosta, P., & Somani, B. K. (2022). Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers in Surgery, 9, 1-6.

Yapo, A., & Weiss, J. (2018). Ethical implications of bias in machine learning. Proceedings of the 51st Hawaii International Conference on System Sciences, 5365-5372.
