Robots with intelligence and morality—can we hold them accountable?

This blog post delves deeply into whether we can entrust judgment and responsibility to robots that resemble humans.


If robots with intelligence and moral standards close to those of humans were created, could we entrust them not only with simple labor but also with the intellectual judgments humans make? You might think this is a topic better suited to science fiction movies or novels. However, the United States has recently developed combat robots, primitive versions of the machines seen in films like Star Wars. These robots operate on the battlefield, distinguishing allies from enemies and killing the latter, and they have sparked ethical controversy. The debate begins with whether robots can reliably tell allies, enemies, and civilians apart, and extends to the ethics of letting machines kill people at all. One might dismiss this as a special circumstance of war, but if the underlying technology spreads into everyday life, 'robots that make their own judgments', previously confined to the movies, could become reality.
My answer to this question is that we cannot entrust it to robots. No matter how advanced robots become, people will never be able to entrust everything to them. Those who disagree point out that the information processing capabilities of the robots we already use surpass our own. They also claim that since human thought is ultimately based on electrical signals in the brain, robots can replace us in our tasks. I still believe, however, that we cannot entrust robots with sophisticated intellectual judgment, and I will present three main reasons: the problem of responsibility, the problem of value judgment, and the limits of creativity.
Before developing my argument, I must make two assumptions to keep the discussion precise. The first is that the robots under discussion possess intelligence and moral standards close to those of humans. Robots already surpass us in raw information processing, but the intelligence meant here encompasses not just information processing but also the morality, emotions, and situational judgment humans employ in decision-making. This assumption differs from writings that typically depict robots as emotionless machines; in other words, it assumes the gap in empathy that currently separates robots from humans has largely disappeared. The second assumption is that even as robots become more human-like, they remain products subject to industrial regulations, performance standards, and relevant laws. With that settled, let us begin the discussion in earnest.
First, we cannot entrust all human tasks to robots because responsibility becomes unclear when robots make mistakes. Just as the computers, smartphones, and appliances we use today can malfunction, so can robots. When our current machines fail, the consequences are usually limited to minor delays or inconveniences. But if robots were to replace human judgment, robots capable of making critical decisions would be developed, and a malfunction in such a robot could be far more severe. Indeed, in the United States, a system error at a power plant once caused a statewide blackout. That system was comparatively simple, but we cannot assume that more sophisticated robots are immune to malfunction. Robots are not yet widely used for such critical decisions; if they advance and take on more critical tasks, however, the impact of their errors will be impossible to ignore.
But when a robot makes a mistake, can it be held accountable? When humans make mistakes, they risk damage to their reputation or property, which motivates them to exercise caution in major decisions, and even then they face consequences when mistakes occur. A robot, however human-like its intelligence, remains merely a product without legal agency. It can own neither money nor reputation, so it can neither pay compensation nor be meaningfully punished.
So, when a robot malfunctions, who bears the responsibility? Whether you look to the manager, the manufacturer, the owner, or the user, no one bears direct responsibility. In some cases the user might even be the victim and still have to bear the responsibility themselves. Because of these problems, we cannot entrust all tasks to robots or let them make every decision. Robots can propose solutions and process information quickly, but the final decision must ultimately rest with humans.
Of course, some argue that as technology advances, robots could feel pain like humans, making it possible to punish them; humans, after all, feel guilt when their mistakes harm others and face social punishment. But even a robot that feels remorse is still a product that cannot compensate anyone or take responsibility itself. And while physical punishment was used as retribution in pre-modern human societies, inflicting physical harm on a robot as punishment, even one that feels pain like a human, would be nothing short of barbaric. What it would even mean to 'execute' a robot is equally questionable.
A counterargument here is that establishing legal regulations in advance would clarify responsibility when a robot malfunctions: just as humans face legal consequences for wrongdoing, a robot's responsibility could be defined by pre-established laws, ensuring that someone bears it. This counterargument fails in practice for two major reasons. First, even with a legal basis, the problem of interpretation remains. As endless disputes over guilt and innocence show, the law is not as clear-cut as one might think. Laws contain conflicting clauses and countless factors to weigh, and applying the same provision can lead to different rulings. The law is not a perfect tool that solves every problem; it merely provides a basis.
Second, holding someone legally accountable may be practically impossible. Consider the candidates: the manufacturer, the owner, or the user. The manufacturer cannot be held accountable indefinitely for a robot's malfunctions. It might bear significant responsibility for defects present at the product's initial release, but it is unrealistic to expect the product to remain in its original state over time. Just as the laptops and smartphones we use carry warranties of one to two years, robots would have a warranty period, and after it expires it would be difficult to hold the manufacturer significantly liable. Admittedly, this aspect could be addressed by legal standards built around the warranty period.
In that case, liability would fall primarily on the owner or user. The problem is that as robots replace more human judgment, the scale of damage from a malfunction grows as well. As the power plant error discussed in Moral Machines: Teaching Robots Right from Wrong illustrates, the damage may not be limited to individuals or small groups, and the burden of responsibility could become too heavy for any single person or company. Since robots themselves lack the capacity to compensate, holding them accountable would be meaningless. And if the owner is a nation-state rather than an individual or organization, the ironic result would be affected citizens being compensated by that same state out of their own tax money.
Holding the user accountable could be equally unfair, as noted earlier: if the victim and the user are the same person, the user would absurdly be held responsible for their own harm. The owner's position is no different. Robots with near-human intelligence and morality would likely be left to judge and operate autonomously, and holding an owner responsible for a robot's fault simply because they permitted it to operate without intervening would be unjust. Establishing liability through law does not solve the problem.
Now for the second reason we cannot entrust robots with decisions in place of humans: robots cannot make value judgments we can rely on. This may seem to contradict the earlier assumption, but even if we could technically enable robots to make value judgments, the question remains whether those judgments would be socially acceptable. Can even a robot with near-human intelligence and morality make desirable decisions that cause no harm? Looking at our own society, the odds are against it. Most people know right from wrong, yet they sometimes act wrongly, swayed by circumstances or personal values; in identical situations, some commit crimes and others do not. This stems from differences in human will and value judgment.
In other words, robots would make their own judgments just as humans do, and their outcomes would become correspondingly hard to predict. Like V.I.K.I. in the movie I, Robot, robots might even harm humans in order to suppress humanity's destructive nature. Because of this unpredictability, even robots with near-human intelligence and morality cannot be trusted with all human tasks. Some might argue that robots only execute given commands and therefore cannot cause harm. But the deeper problem is that we ourselves cannot always be certain about what is right. Utilitarianism and Kantian deontology, for instance, sometimes reach conflicting moral conclusions; the two theories take opposite stances on lying for a good cause.
Both utilitarianism and deontology provide criteria for moral judgment, but the motives behind those judgments are not always moral. For example, while the US used the term ‘axis of evil’ to justify its war in the Middle East, the interests of US military contractors likely played a role in the background. Similarly, even if a robot makes judgments based on moral theory, it could potentially abuse those judgments just as humans might. Because the very criteria for moral judgment are unclear, we still cannot entrust human decisions to robots; the final decision must be made by humans.
This argument might prompt the question: ‘Don’t humans face the same problem?’ Humans can also disagree when making decisions. Some argue that robots, with superior information processing capabilities, could make better decisions. They even suggest that if robots can make value judgments, multiple robots could reach a decision through discussion. I oppose this argument for two reasons.
The first reason is that unpredictable robots would simply stop being used. Robots remain products, so they must perform the tasks we want done. A robot that behaves in unforeseen ways and fails to act as intended cannot replace us in our work. Imagine typing 'a' at the beginning of a sentence in a word processor, only for it to keep changing to 'A': a machine that overrides your intent is extremely frustrating. If robots start making their own value judgments, they are more likely to cause inconvenience than convenience.
The second reason is that not all decisions are made by groups. While multiple robots might deliberate together where resources allow, in settings that lack such resources a single robot would decide alone, and a wrong decision there could cause significant damage. Moreover, as with the conflict between utilitarianism and deontology mentioned earlier, even a group of robots might fail to reach consensus across conflicting moral standards.
The third reason we cannot entrust human tasks to robots is that they cannot perform creative work. One might counter that human creativity is ultimately based on experience too. But what looks like creativity in a robot is different from genuinely generating new ideas. A chess robot defeats champions by searching and scoring an enormous number of possible moves with a fixed evaluation rule, not by devising new strategies (see the sketch below). Robots can remember more cases and solve problems based on statistics, but this actually highlights the difference between humans and robots: judgments based on statistics can overlook minor variations or extremely rare cases, and humans may be better suited to spotting such unusual instances.
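To make that contrast concrete, below is a minimal Python sketch of the kind of brute-force search such programs perform, applied to a toy take-away game rather than chess (whose game tree is far too large to show here). This is purely illustrative, not any real engine's method: the machine's 'strategy' is nothing but enumerating every continuation and checking who wins, and no new idea is ever produced.

from functools import lru_cache

# Toy game: players alternately take 1-3 stones; taking the last stone wins.

@lru_cache(maxsize=None)
def can_win(stones):
    # The player to move wins if some legal take leaves the opponent
    # in a losing position -- found by exhaustively trying every line.
    return any(stones - take == 0 or not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # Return a winning take if one exists, else any legal take.
    for take in (1, 2, 3):
        if take <= stones and (stones - take == 0 or not can_win(stones - take)):
            return take
    return 1  # no winning move exists: every continuation loses

print(best_move(10))  # 2 -- found by brute force, not by insight

Real chess engines add pruning and evaluation heuristics on top of this pattern, but the principle is the same: calculation over a fixed rule, not invention.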
Even when using induction, a core method of modern scientific research, robots may underestimate the probability of rare possibilities (a toy illustration follows below). This tendency limits a robot's ability to formulate hypotheses. Among scientific theories, innovative ideas that overturn the prevailing consensus emerge regularly, and these frequently cannot be derived from statistics or existing data alone. Robots also face limits on information processing capacity, and no single robot can conduct all the research in the world. Deciding which fields to research and proposing new directions will remain the domain of humans.
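As a toy illustration of how purely frequency-based inference can zero out rare cases, the Python snippet below estimates event probabilities from observed counts alone: an event absent from the data is judged impossible unless something like Laplace smoothing is bolted on. This is a deliberately simplified sketch of the tendency described above, not a model of any actual robot.

from collections import Counter

def mle_probability(observations, event):
    # Maximum-likelihood estimate: the event's relative frequency in the data.
    return Counter(observations)[event] / len(observations)

def smoothed_probability(observations, event, vocab_size, alpha=1.0):
    # Laplace smoothing: reserves a little probability mass for unseen events.
    counts = Counter(observations)
    return (counts[event] + alpha) / (len(observations) + alpha * vocab_size)

data = ["common"] * 999 + ["uncommon"]
print(mle_probability(data, "never_seen"))          # 0.0 -- judged impossible
print(smoothed_probability(data, "never_seen", 3))  # small but nonzero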
We have now covered three reasons we cannot entrust human work to robots, along with the counterarguments. Of course, robots with near-human intelligence and morality have not yet been created, and their feasibility remains uncertain. But science and technology advance rapidly, and ethical standards have often failed to keep pace. Just as the ethical problems of nuclear weapons were seriously debated only after the bomb had been built, the impact of creating human-like robots would be immense, and waiting until they exist to discuss it may be too late. The discussion must begin now.


About the author

Writer

I'm a "Cat Detective" I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.