The evolution of artificial intelligence: opportunity or disaster for humanity?

In this blog post, we take an in-depth look at whether the evolution of artificial intelligence, which may even develop a sense of self, will be an opportunity or a disaster for humanity.


In March 2016, Lee Se-dol lost to AlphaGo four games to one in a match of Go, sparking a surge of interest in artificial intelligence in Korea. AI has since become a natural part of our lives in fields such as medicine, distribution, insurance, and health. Alongside this interest and these expectations, however, people also harbor fears. If strong AI ever becomes capable of genuine thought and self-awareness, it might carry out actions no human commanded, such as launching nuclear weapons or waging war. That could be a great disaster for humanity.
Recent AI has reached a stage where it can converse with humans and assist in human decision-making, thanks to machine learning (a technology that analyzes vast amounts of data to make predictions) and deep learning (a technology that enables computers to classify objects and data in a way loosely inspired by the human brain). Because of these advances, we can now talk to voice assistants on our smartphones, receive AI-assisted diagnoses, and even trade through AI. In addition, robots equipped with AI are economical because they can operate around the clock for long periods. But does AI always bring benefits?
For example, in 2012, algorithmic trading (generating stock quotes and executing trades according to rules set in software) cost the US trading firm Knight Capital $440 million in 45 minutes, and Hanmaek Investment & Securities lost 46 billion won in two minutes. What these two incidents have in common is that the damage was triggered by errors in the software, not by a human trader. Of course, humans also make mistakes, but an error in an automated system can cause far greater damage, far faster, than a human one.
What would happen if a strong, self-aware artificial intelligence were created? There is no guarantee that such an intelligence would be benign. Professor Kevin Warwick of the United Kingdom argues that robots and androids could rebel: he believes robots would try to dominate humans once they judged that they could do our jobs better than we can. The world-renowned futurist Ray Kurzweil argues that when the singularity arrives (the moment when artificial intelligence surpasses biological evolution), it will become possible to upload human minds. The renowned scientist Stephen Hawking likewise warned that the development of full artificial intelligence could spell the end of humanity: AI could improve and evolve on its own, while humans, limited by the slow pace of biological evolution, could not compete and would be superseded.
However, there are also arguments that rebut this nightmare scenario. Professor Lee Kwang-hyung, a Korean authority on AI research, explains that robots would need both a sense of self and the ability to organize in order to dominate humans, but that "unlike humans, self-aware robots cannot create 'fiction.' Therefore, robots lack the organizational ability to form leadership and groups as humans do, and thus cannot exert 'collective power.'" In other words, even an AI with a sense of self could not rule over humans, because it would lack organizational capability. Furthermore, Dr. Moon-Sang Kim, head of the Intelligent Robot Technology Development Project, predicted that robots opposed to humans will not be developed, saying, "There is no guarantee that robots will be able to mimic human intelligence." In other words, he argues that the biological human brain cannot be converted into a mechanical algorithm.
If strong AI is ever developed, our future may be uncertain. However, many AI experts argue that it is impossible to develop an AI that can dominate humans. And just as humans learn ethics, robots can be taught them. For example, a program called "Quixote" enables AI to learn behavioral norms through reading stories. It trains the AI by sending a "reward signal" when the AI takes appropriate actions based on correct values and a "punishment signal" when it does not. In this way, guidelines can be created for AI to behave appropriately in human society.
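The reward-and-punishment scheme described above can be sketched in a few lines of code. This is a minimal illustration, not Quixote itself: the two actions and the teacher function are hypothetical stand-ins, and the real system learns values from full stories rather than a fixed signal table.

```python
import random

# Hypothetical choices the agent can make in a social situation.
ACTIONS = ["wait_in_line", "cut_in_line"]

def signal(action):
    """Hypothetical teacher: +1 reward for the socially appropriate
    action, -1 punishment for the inappropriate one."""
    return 1 if action == "wait_in_line" else -1

def train(episodes=200, lr=0.1, seed=0):
    random.seed(seed)
    values = {a: 0.0 for a in ACTIONS}  # learned preference per action
    for _ in range(episodes):
        action = random.choice(ACTIONS)  # explore both behaviors
        # Nudge the stored value toward the reward/punishment received.
        values[action] += lr * (signal(action) - values[action])
    return values

values = train()
best = max(values, key=values.get)
print(best)  # the agent comes to prefer the rewarded behavior
```

After repeated reward and punishment signals, the agent's stored value for the appropriate action rises while the value for the inappropriate one falls, so it ends up "choosing" the behavior its human teachers approved of — the same shaping idea the paragraph describes.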
Even weak AI carries risks: it could be used to create "killer robots" for war, and as robots replace human workers, job losses could cause various social problems, up to the collapse of the existing economic order. Still, the development of AI is making our lives better and more convenient, and AI is not being developed with the aim of destroying humanity. The idea that "artificial intelligence will destroy humanity" may be an anxiety born of our own ignorance. It is difficult to predict the future accurately, and, as the experts above argue, it is far from clear that strong AI could ever dominate humanity. Even so, it would be prudent to establish safeguards for controlling strong AI in an emergency.
One way to ease our anxiety about AI is to regard it as a kind of person, placing trust in the very beings we fear. We should also believe in ourselves. Looking back on past crises (epidemics, resource shortages), humans have always responded wisely. We built our present civilization by developing science and technology, and AI is simply another tool we have created. Rather than dwelling on its negative aspects, we should take a positive view of the role AI will play and how it will develop in the future.


About the author

Writer

I'm a "Cat Detective": I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.