Today, the creation of artificial intelligence is a goal that many researchers pursue. The consequences of success, in this case, have been the subject of many dystopian films and novels. I’ve always thought that the idea of creating artificial intelligence is very exciting. However, this week’s reading motivated me to think more about the possible problems of creating artificial intelligence, as well as the ethical implications associated with the very process of its creation.
For instance, if we create artificial intelligence based on human intelligence, some of the qualities deemed less necessary will be omitted during the process of abstraction. This poses a risk that artificial intelligence will lack the qualities we consider humane, such as empathy, ethical reasoning, and religiousness. Moreover, this loss seems inevitable even if we let intelligent agents recreate their intelligence in other agents: the process of abstraction will be applied again and again, further reducing the presence of humane qualities in the created agents. Gradually, intelligent agents will become more intellectually advanced than humans, but, at the same time, their way of thinking will hardly be considered human.
This made me think about the threat that artificial intelligence could actually pose in the future. For example, could intelligent agents turn on humanity and start a war, as we see in the movies? Given that artificial agents may become more intellectually advanced than humans, would they consider us an inferior race and take control of the world? At this stage, these questions seem ridiculous, but if we think about it, there is no guarantee that this will not happen. We’ve already seen news of a robot with artificial intelligence escaping its research facility, even after being reprogrammed. Could this have been the robot’s informed decision rather than a technical fault? And what could we do if robots posed a threat to our safety in the future? Of course, this is the worst-case scenario, but if we do abstract away humane qualities, there will be nothing stopping robots from becoming hostile in the future, whereas preserving those qualities could obstruct the creation of artificial intelligence in the first place.
Nevertheless, if a threat from artificial intelligence does emerge in the future, will it be ethical to destroy creatures that are more intellectually advanced than we are? Our moral laws tell us not to kill other people, and at first glance destroying robots would not violate that law. However, if we see intelligence as the defining feature of humanity, and if robots become more intellectually advanced than us, destroying them would go against our basic moral laws. Even in the case of the escaped robot, the possibility that the research company would destroy it for its ‘bad behavior’ created controversy around the world. If people believe that killing intelligent robots goes against our ethical and moral values, does that mean that we will succumb to the power of artificial intelligence in the future?
Overall, this week’s reading changed how I feel about artificial intelligence and abstraction. I now see these concepts in a new light, both as opportunities for significant breakthroughs and as a threat to the future of the human race in general. As terrifying as it might seem, I think that artificial intelligence might be the next step in the development of the universe, one that could bring an end to the era of humans.