Should we grant moral rights or responsibilities to emotionless robots?

This blog post examines whether artificial intelligence and robots can be subjects of moral rights or responsibilities even without emotions, exploring the limitations of human-centered ethics and new standards.


AlphaGo’s victory over the world’s top human Go player fundamentally challenged the privileged position humans have held in the natural world, and it became a catalyst for questioning the human-centered tradition that ethics has long presupposed. We now face the question of whether artificial intelligence, whose intelligence rivals and sometimes surpasses our own, should be recognized as a subject worthy of moral consideration. Those who hesitate to answer yes locate the core of humanity not in intellectual capacity but in the emotional realm: in feelings like joy and sorrow, fear and compassion. AlphaGo, for instance, cannot rejoice in its competitive victory, and this is precisely why we cannot raise a toast with it. The expectation that even if specific human tasks are taken over by AI-equipped robots, the work of reading human emotions and engaging in emotional interaction will remain difficult to replace stems from this line of thinking.
However, recent AI research is actively pursuing the creation of robots with emotions, that is, the implementation of artificial emotion, and this aspiration is growing ever stronger. Robots that assist humans in caregiving and therapy could respond sensitively to users’ nuanced needs, and several countries are already developing such emotion-based care robots. People may gradually come to regard robots capable of emotional communion as family members. So will robots become beings that possess human-like emotions and interact with humans? And should robots be accepted into the moral community? To answer these questions, we must first reflect on the core role emotions play for humans. Just as artificial intelligence research has been both a process of mimicking human thought and a way of gaining deeper insight into human cognition, artificial emotion research is an attempt to create machines that resemble humans emotionally, and at the same time a process of understanding the essence of human emotion by analyzing emotional processes through computational models.
Unlike cognitive processes, emotions function to help an organism maintain survival and homeostasis with relatively little information. They also serve a motivational role, determining what to pursue and what to avoid. In social interactions, humans read subtle emotions through each other’s physical reactions or facial expressions, respond appropriately to that information, and maintain their community through this emotional communion.
However, determining whether a robot actually experiences such emotions is far from simple. Philosophers have long argued that even if artificial intelligence performs the same cognitive tasks as humans, it cannot be considered true intelligence if it lacks understanding of meaning. The same logic applies to artificial emotions. If emotions are defined as internal emotional experiences rather than merely a series of behavioral patterns producing appropriate outputs to input stimuli, artificial emotions cannot immediately be equated with human emotions. Even in the case of humans, identical behavior does not guarantee identical mental states; two people exhibiting the same actions may feel different emotions, and vice versa. For robots, identical behavior does not even imply the existence of mental states.
For a robot to possess emotions, it must not only recognize and express them but also generate internal emotions independently. However, this requires preconditions that are realistically difficult to fulfill. First, it is assumed that an emotional being possesses basic impulses or desires. Without instinctual desires like thirst, hunger, or fatigue, or motivational bases such as the desire for achievement or exploration, emotions cannot exist. Second, to possess emotions similar to those humans have for social interaction, a robot must possess at least the general intelligence of a higher animal and, like living organisms, be able to adapt to complex and unpredictable environments. However, the implementation of general intelligence capable of autonomously adapting and acting within complex environments remains a distant challenge. Current artificial intelligence research focuses on how efficiently specific tasks are solved within defined domains, treating other problems as secondary. Therefore, there is still no basis for accepting robots without genuine emotions as members of a moral community.


About the author

Writer

I'm a "Cat Detective." I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.