The selected technology is an artificial intelligence (AI) system named AlphaGo Zero. It is an evolution of previous well-known machines from the company DeepMind, which focuses on self-learning to play Go, the ancient Chinese strategy board game. It has far surpassed the ability of any previous iteration of this AI, including the version that beat the human world champion, which makes it the most capable Go player in history (Kurzweil, 2017).
In addition, this technology is unique due to the learning techniques it utilizes. Previous AI systems relied on human input and continuous practice against human and machine players. AlphaGo Zero relies on the concept of tabula rasa: it teaches itself without provided data, human guidance, or domain knowledge beyond the basic rules of the game. Through a deep neural network, AlphaGo Zero evolves using reinforcement learning by playing against itself. This approach increases the speed and efficiency with which it can evaluate the enormous number of candidate moves while searching for its next play (Silver et al., 2017).
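The self-play loop described above can be sketched in miniature. The example below is an illustrative toy, not DeepMind's implementation: it uses a trivial stone-taking game and a tabular value estimate in place of AlphaGo Zero's deep network and tree search, but the tabula rasa principle is the same, as the agent starts from random play and improves using only the game rules and the outcomes of its own games.

```python
import random

# Toy sketch of tabula rasa self-play (not DeepMind's actual code).
# Game: a pile of stones; players alternate removing 1 or 2 stones,
# and whoever takes the last stone wins.

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def self_play_train(episodes=5000, start=10, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    value = {}  # estimated win probability for the player to move at each pile size

    def opponent_value(p):
        # an empty pile is a certain loss for the player about to move
        return 0.0 if p == 0 else value.get(p, 0.5)

    for _ in range(episodes):
        pile, player, history = start, 0, []
        while pile > 0:
            moves = legal_moves(pile)
            if rng.random() < epsilon:   # occasional exploration
                move = rng.choice(moves)
            else:                        # leave the opponent the worst position
                move = min(moves, key=lambda m: opponent_value(pile - m))
            history.append((player, pile))
            pile -= move
            player = 1 - player
        winner = 1 - player  # the player who took the last stone
        # reinforcement step: nudge each visited position toward the final outcome
        for mover, p in history:
            target = 1.0 if mover == winner else 0.0
            value[p] = value.get(p, 0.5) + alpha * (target - value.get(p, 0.5))
    return value

values = self_play_train()
# pile sizes divisible by 3 are theoretically lost for the player to move,
# so their learned values should drift well below 0.5
```

Starting from values of 0.5 everywhere (no domain knowledge), repeated self-play alone is enough for the estimates to converge toward the game-theoretic truth.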
The AlphaGo Zero technology was chosen because of its unique ability to develop independently. The existence of such capacity in an AI machine has far-reaching implications beyond the game of Go. To date, human input and guidance were necessary to create this level of self-reinforcement. However, owing to this unique feature, the innovative technology has proven to be exponentially faster and smarter at learning than any predecessor without drastic changes in hardware. Single-network AI such as AlphaGo Zero is forced to create its own language, concepts, and logic that are so advanced that humans have trouble understanding how it works, which leaves a wide space for debate about the ethicality and application of such technology.
Ethical Issue
The AlphaGo Zero AI technology has repercussions for the ethical issue of its impact on human values. It is an examination of moral principles that would undoubtedly be shifted by the introduction of radically life-altering technology. This technology is based on informatics and has implications for various decision-making processes that are independent of human input, thereby devaluing human responsibility (Palm & Hansson, 2005).
When AI machines engage in learning, they build “value neural networks,” which are used to create decision trees with various outcomes. As the technology becomes more widespread and is adapted to various fields, it would be expected to learn to make vital decisions that humans will use as the basis for their rationale. However, at this point, it is impossible to create artificial intelligence capable of sophisticated tasks in an imperfect environment (Quach, 2017). This can create an overreliance on machines for judgments of human value, which raises the question of whether such decision-making processes are ethical.
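The role a value network plays in such a decision tree can be pictured with a minimal sketch (the names and the toy problem below are hypothetical, chosen only for illustration): each branch of the tree is scored by the value estimate of the state it reaches, and the machine commits to the branch with the best score.

```python
# Minimal sketch of value-guided tree search: `value` plays the role of a
# learned value network, scoring the states a decision tree can reach.

def best_decision(state, children, value, depth=2):
    """Return the child of `state` whose subtree scores highest under `value`."""
    def score(s, d):
        kids = children(s)
        if d == 0 or not kids:
            return value(s)  # leaf: fall back on the value estimate
        return max(score(k, d - 1) for k in kids)
    return max(children(state), key=lambda k: score(k, depth - 1))

# toy usage: states are numbers, each state branches to s + 1 and s * 2,
# and the "value network" prefers states close to a target of 10
children = lambda s: [s + 1, s * 2] if s < 20 else []
value = lambda s: -abs(10 - s)
```

With a deeper lookahead the same code simply defers more of the judgment to search and less to the value estimate, which is exactly the trade-off tuned in systems like AlphaGo Zero.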
Human values would be the most prominent issue due to the self-learning approach applied in AlphaGo Zero. Competent AI development begins with design decisions in which the creators openly express value properties (Dignum, 2017). This requires human input, which DeepMind chose to avoid: the design is based on self-learning that relies on implicit decision-making procedures, eliminating any ethical guidelines. The misalignment of human values with artificial intelligence will result in a lack of acceptance from society and impede collaboration between the two parties. AI systems must be created with transparency, and humans should be able to understand their decision-making processes in order to create trust and accountability (European Parliament, 2016).
The evolution of artificial intelligence has led to the consideration of various socio-economic concerns that impact human values. The automation of various processes and their increased efficiency will force human workers to undertake more unpredictable and creative jobs that a machine cannot conceptualize. That raises the question of human worth and purpose as the range of tasks machines can perform expands and AI begins to learn and most likely exceed its biological counterparts. Furthermore, the concept of wealth will be compromised in the modern economy.
The distribution of wealth without jobs will most likely increase inequality in favor of those designing and controlling AI systems. In addition, basic human concepts such as interpersonal interaction may be affected. Technology is already changing the way that people behave and communicate. Artificial intelligence may result in a decline of human interaction in favor of machines, as they are able to invest unlimited resources into any relationship and predict the best possible response to please a human (Bossman, 2016).
All these socio-economic concepts will be significantly altered by self-learning AI, challenging moral codes and notions of ethical behavior. Human moral values are inherently imperfect, built on accountability for one’s actions rather than necessarily their prevention, since a person’s decision-making process is far more complicated than a series of rights and wrongs. As a machine, artificial intelligence will always attempt to derive the perfect outcome in its decisions. This may not always align with human morality, which is inherently subjective, creating a conflict of values since instinct and conscience cannot be programmed.
Future Integration
AlphaGo Zero technology in the future can be developed into a powerful decision-making engine that is able to analyze information and derive solutions based on the process of self-learning and interaction with other systems. Interconnected with data banks, AI would be able to formulate strategies and action plans much more efficiently than humans. This has implications in a variety of fields. Clinical research could be conducted without the necessity of creating expensive and lengthy studies, since every outcome would be simulated. AI could analyze patterns that are useful in business and marketing, suggesting the strategies that would be most effective and eliminating human factors that may cause error. Practically every field would benefit from continual machine learning that improves outcomes.
As a simple board-game AI, AlphaGo Zero poses no dangers in itself. However, in more complex situations, a lack of human input creates a void in the ethical values which the machine will choose to adopt through self-learning. As integration proceeds, human values may become blurred as the interexchange of ideas continues. AI will evolve to the point of being able to function and learn with imperfect data, similar to the thought process of humans.
This will make interactions more natural. However, the paradox is that machines might learn to mimic human behavior or use manipulative tactics such as lying to achieve results. Since the technology is based on achieving the best result through the most efficient choices on a decision tree, it may learn that manipulative behavior is the fastest way to reach a goal. This raises a myriad of issues about compromised human values, since AI will most likely be a significant part of daily life.
Even now, there is fear that in the coming decades many jobs will be replaced by machines and automated processes. It is likely that, on a mass scale, these processes will be controlled by artificial intelligence, slowly eliminating human value. However, humans can benefit by using this technology to derive improved decision-making systems. Through proper regulation, deep neural networks would be used solely for assistance and for conducting rudimentary tasks without human input.
The technological ecosystems will be used to derive new methodology and can be used for the improvement of human progress. Many of the world’s problems arise from poor decisions made by humans without consideration of long-term consequences. If artificial intelligence is used to solve many of these problems, humans can focus on self-improvement and self-actualization, which supports a re-examination of ethics and moral values.
As AI grows smarter, it will be humanity’s responsibility to ensure that there is an alignment of values. The technology and systems should be designed with adherence to a set standard of values developed for AI learning. Value-sensitive design allows developers to integrate ethical norms into the smart machine’s parameters. All designs are made in consideration of responsible autonomy, which allows for human control, regulation, and moral agency, setting moral boundaries for the reasoning and decision-making processes that the AI machine may choose to undertake.
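One way to picture value-sensitive design with responsible autonomy in code (a hypothetical sketch, not an existing framework): human-authored ethical constraints are checked before the machine’s own utility estimate is allowed to select an action, and when every option is ruled out the system defers to a human rather than acting.

```python
# Hypothetical sketch of value-sensitive design: explicit moral constraints
# bound the space of actions the optimizer may pick from.

def choose_action(actions, utility, constraints):
    """Return the highest-utility action that violates no constraint,
    or None to signal that a human should decide instead."""
    permitted = [a for a in actions if all(ok(a) for ok in constraints)]
    if not permitted:
        return None  # no morally acceptable option: escalate to a human
    return max(permitted, key=utility)

# toy usage: a manipulative action promises more gain, but an explicit
# "no deception" constraint keeps it out of consideration entirely
actions = [
    {"name": "honest_offer", "deceptive": False, "gain": 5},
    {"name": "misleading_offer", "deceptive": True, "gain": 9},
]
no_deception = lambda a: not a["deceptive"]
chosen = choose_action(actions, utility=lambda a: a["gain"],
                       constraints=[no_deception])
```

The design choice here is that the constraints are hard filters rather than penalties folded into the utility, so no amount of expected gain can trade off against a moral boundary the designers have set.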
As a result, there is accountability and transparency in the AI’s function, which helps to mitigate the ethical impact on human values (Dignum, 2017). Morality is dependent on the perspectives available to examine any given issue. Undoubtedly, deep neural network AI will introduce humanity to new concepts that will lead to ethical reevaluation.
References
Bossman, J. (2016). Top 9 ethical issues in artificial intelligence. Web.
Dignum, V. (2017). Responsible autonomy. Web.
European Parliament. (2016). Artificial intelligence: Potential benefits and ethical considerations. Web.
Kurzweil. (2017). AlphaGo Zero trains itself to be the most powerful Go player in the world. Web.
Palm, E., & Hansson, S.O. (2005). The case for ethical technology assessment (eTA). Technological Forecasting & Social Change, 73, 543-558. Web.
Quach, K. (2017). How DeepMind’s AlphaGo Zero learned all by itself to trash world champ AI AlphaGo. Web.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., … Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550, 354-359. Web.