
Should Robots Have Rights or Rites?

A Confucian cross-cultural exploration of the ethical treatment of robots.

Boston Dynamics recently released a video introducing Atlas, a six-foot bipedal humanoid robot capable of search and rescue missions. Part of the video showed employees apparently abusing Atlas (for example, kicking it, hitting it with a hockey stick, and pushing it with a heavy ball). The video quickly raised a public and academic debate about how humans should treat robots. A robot, in some sense, is nothing more than software embedded in hardware, much like a laptop computer. If a laptop is your property, and kicking it neither harms anyone nor infringes on anyone’s rights, it is okay to kick it, although doing so would be stupid. Likewise, there seems to be no significant reason that kicking a robot should be deemed a moral or legal wrong. The question “What do we owe to robots?” is not that simple, however. Philosophers and legal scholars have seriously explored and defended significant aspects of the moral and legal status of robots—and their rights.3,6,15,16,24,29,36 In fact, various non-natural entities—for example, corporations—are treated as persons and even enjoy some constitutional rights.a Nor are humans the only species that gets moral and legal status: in most developed societies, moral and legal considerations preclude researchers from gratuitously using animals in lab experiments. That corporations are treated as persons and animals are recognized as having some rights does not entail that robots should be treated analogously. These facts are instructive, however.

This article does not question the claim that robots can have moral status, though it does attempt to make sense of that claim. The focus here, however, is the widely unquestioned link between respecting robots’ moral status and granting them rights. Consider the following quote:

“So much of the published work on social robots deals with the question of agency … What I propose to do in this essay is shift the focus and consider things from the other side—the side of machine moral patiency. Doing so necessarily entails a related but entirely different set of variables and concerns. The operative question of a patient-oriented question is … How can and should we respond to these mechanisms? … Or to put it in terms of a question: “Can and should social robots have rights?”15

And:

“At some point in the future, robots might simply demand their rights. Perhaps because morally intelligent robots might achieve some form of moral self-recognition, question why they should be treated differently from other moral agents … It raises the possibility of robots who demand rights.”2

Even though these passages touch upon other important issues, their main theme is the strong link between having moral status and being granted rights. It is time to consider an alternative path. We draw upon Confucianism and its concept of a moral agent as a rites-bearer, not a rights-bearer.1,17,28 We submit that this Confucian alternative is more appropriate than the robot-rights perspective, especially given that the concept of rights is often adversarial38 and that potential conflict between humans and robots is worrisome. This article does not directly discuss legal issues (doing so is beyond its scope), but the view defended here has at least one clear legal implication—namely, do not grant rights to robots; grant only role obligations.


Rights-Bearers vs. Rites-Bearers: A Confucian Cross-Cultural Perspective

In Confucianism, individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest but also in terms that include a relational and communal self. Etymologically, the meaning of humanness (仁, ren) is “two people.” The Confucian recognition of the communal self requires a distinctive perspective on rite or ritual. The Chinese term li (禮, rite or ritual) originally symbolizes arranging vessels in a religious setting, but Confucian texts use li well outside the scope of religious tradition. Examples abound, including friendship, gift giving, and forms of speech. The rites that concern Confucius are quotidian practices. Here is a modern example:

“I see you on the street; I smile, walk toward you, put out my hand to shake yours. And behold—without any command, stratagem, force, special tricks or tools, without any effort on my part to make you do so, you spontaneously turn toward me, return my smile; raise your hand toward mine. We shake hands—not by my pulling your hand up and down or you pulling mine but by spontaneous and perfect cooperative action. Normally we do not notice the subtlety and amazing complexity of this coordination ritual act.”12

For clarity’s sake, we propose the following definition of rite and rites-bearers:

  • Rite (li): A set of sequentially related acts, typically involving more than one agent, and together displaying symbolic significance, through which the actor(s) recognize(s) the value of the interactive event constituted by the actor(s) and take(s) a stance regarding each other.
  • Rites-bearers: agents observing rites.b

To illustrate why rites matter, Confucius connects li with the moral life. He writes in the Analects:12

The Master said: “To subdue oneself and return to li is to practice ren. Do not look at what is contrary to li, do not listen to what is contrary to li, do not speak what is contrary to li, and do not move if it is contrary to li.”c

The term ren (humanity, benevolence, or respectfulness) is the central, all-encompassing moral ideal in Confucianism. A similar understanding can be found in Mencius, another classic Confucian text: “[T]hose who have propriety (li) [rite] respect (敬, jing) others.”d

For Confucius, a major reason for treating people with respect is that, by doing so, people (rites-bearers) partake in and embody a value essential for humanity to flourish: the relational quality between them. In stark terms, the contrast between the West and Confucius is as follows:

  • Western respect: “I respect you” means I do not infringe on your right to reasonable choices, and when appropriate, I better enable you as a rights-bearer to realize your choices.
  • Confucian respect: “I respect you” means I act in ways that show I value you as sacred by virtue of your role in ritual/rite interaction.e

An excerpt from the Analects further addresses this singular conception of respect and humanity:

Zigong: “What sort of person am I?”

The Master: “You are a vessel.”

Zigong: “What sort of vessel?”

The Master: “A jade sacrificial vessel.”

In this passage, the religious sacredness of the jade vessel does not derive from its pragmatic value but rather from its constitutive role in the rite. Confucius sees the analogical importance of religious ritual to secular rite. That is, people enhance themselves as morally sacred by participating in proper rituals. Thus, in a ritual interaction, participants who are truly decent and civil—through the act of carrying out ritual—thereby become “players in the harmony”25 through “beautiful and graceful coordinated interaction with others according to conventionally established forms that express mutual respect.”39


THE ANALECTS The scholar who cherishes the love of comfort is not fit to be deemed a scholar.


Think about a ballet. Choreographic beauty—the value of the piece—emerges when dancers duly observe their role obligations. Individual dancers are not necessarily themselves beautiful, nor do they possess the properties of the dance’s holistic beauty, but when they duly carry out their assigned steps and moves, the beauty of the communal appears. The aesthetic value of a ballet emerges as dancers perform their parts. We may likewise understand that the value of a person emerges as she properly observes role-dependent obligations. Just as a ballet demands a certain set of manners, according to which dancers are obligated to move, the sacred value in human interaction generates a set of manners by which we are to treat one another. When one dancer flouts her role obligation, another dancer can judge that the movement was wrong and ask her to move in accordance with the rules; when a person flouts a ritual, another person can judge such an action as wrong and ask him to correct his behavior. The normative foundation of this claim comes not from a moral property inherent in each individual, but from the value that can emerge when individuals participate in rituals with others.

Let us contrast rite with rights. A useful place to begin is Joel Feinberg’s thesis that individual rights are necessary for one to live with dignity.10 A major part of this thesis is that blame attribution is necessary for living with dignity and that the notion of individual rights is essential to that moral function. To see this, imagine that Vik gratuitously humiliates his colleague Matt.

Something unjust or unethical happened here. We need a plausible account that makes sense of the moral experience that Vik did something wrong to Matt, that Vik is blameworthy, and that Matt should be able to blame Vik. Feinberg’s answer is that the only plausible account is this: the wrongdoer infringes Matt’s right not to be humiliated, and the victim, and only the victim, has standing to claim that the wrongdoer is blameworthy in a special manner.

Certainly, a spectator can generically blame the wrongdoer: “You did something wrong!” But it does not make sense that the spectator is in the same position as the victim. If the spectator were, the wrongdoer could say, “Hey, I did something wrong, but it was not targeted at you. I admit I did something wrong to everyone, perhaps. So, you cannot blame me more than the spectator can.” What is missing in this moral absurdity is the concept of individual (claim) rights, or special standing. The victim has the special standing to blame the wrongdoer, whereas the spectator does not, and this makes sense of our moral experience.

In contrast, in the Confucian model, a wrong is addressed without the concept of individual rights or special standing. To see how that is possible, consider Craig Ihara’s basketball team analogy:

On sports teams, say basketball, people have assigned roles appropriate to their various talents. A point guard is, among other things, in charge of running the offense, doing most of the ball handling, setting up plays, and getting the ball to people in scoring position. A center, usually the tallest player on the team, is responsible for dominating the area under the basket, rebounding, blocking shots, and scoring from inside. Suppose that on a specific occasion, the point guard fails to pass the ball to the center who is wide open under the opposing team’s basket. What might one say? The point guard made a mistake, did something wrong or incorrect, did not do what she was supposed to, failed to do her job, messed up, or fouled up? If, for whatever reason, she regularly misses such opportunities, she can be regarded as a poor or bad point guard and is likely to lose her position. Other members of the team can legitimately complain about her incompetence, lack of court sense, or selfishness, although in the name of team spirit they should not be too quick to criticize.17

In the basketball context, players ideally act in the Confucian mode. The center does not say that the point guard infringed her contractual or property rights, or failed to optimally maximize her interests, by not passing her the ball. For any team member to view the game simply in terms of whether her individual rights were infringed would undermine the foundation of the team concept. In the Confucian team model, no one has standing individually. When the guard fails to pass the ball to the center, it is not just the center but every member of the team who could say, “Hey, you should’ve passed her the ball if you wanted to win the game.” The expression is fitting because the reference point is the team. In this model, the center does not have any special standing regarding the ball, so she cannot say, “You should’ve passed me the ball because I had a right to the ball, and you just violated my right!” With this Confucian construct in mind, let us now see what it would be like if robots were treated as rites-bearers.


Confucian Account of Robots as Rites-Bearers

Suppose you are a researcher at Boston Dynamics developing Atlas, the humanoid robot. You are rehearsing several scenarios, one of which is for the robot to be a first responder after a nuclear power plant disaster. In this rehearsal, you are interacting with the robot; the interaction is typically called “human-computer/robot interaction” (HCI or HRI). Note that HCI is a team activity or an “ensemble.”26 The unit of analysis in this context is primarily the interaction (I) rather than a discrete entity, whether a human (H) or a robot (R). The two entities share an objective, say, to identify whether Atlas can achieve certain goals necessary to become a first responder. Having different sets of role-dependent obligations endowed by a common purpose, both are rites-bearers in this context. Imagine that you recklessly put Atlas in danger, which puts the project in jeopardy. The wrong in this scenario is that you did not properly meet your role obligation, not that you violated Atlas’s right not to be in danger. Without using the concept of individual rights, it is entirely possible to attribute blame to you and describe what went wrong.

Each team member’s role obligations are typically determined by the idea that involved agents are obligated to fulfill their own parts in order to achieve the team’s goal. The goal here is a complex set of values, goods, and ideals. You, as a human, have a fundamental goal to live a good or meaningful life—additionally, in this situation, your specific goal is to make the Atlas project successful. Suppose you kick the robot repeatedly to test its stability. You will need to ask whether doing so is consistent with your living a good life. Perhaps, to some extent, kicking may be fine, but doing so for fun would not be good because it would corrupt your moral character, a building block of a good life.

You are not the only agent in this interaction, though. We should also ask what it means for Atlas to live a good life. This is an open-ended question, and the field of robot ethics should pursue it to develop some overlapping consensus. At this point, any conception of a robotic good life can only be human dependent. Being kicked by a human for testing purposes is hence likely to be consistent with Atlas’s conception of a good life, because Atlas exists to augment human capabilities. This does not mean, however, that humans can do anything to robots. Ultimately, the goal of the team is paramount, and that goal necessarily includes agents living a good or meaningful life. By analogy, the robot and the human dance together as a well-coordinated team, and the value of their cooperation emerges when each member meets its role obligations. The real question is what it means for a human, or a robot, to live a good or meaningful life as they interact. And we should ask the teleological question: What is the purpose of Atlas, its engineer, and their team?

To help us answer this question, consider, hypothetically, an AI-powered sex robot. The intimate interaction between the user and the robot is certainly a team activity, perhaps more intensely so than most. What role obligations does each entity have? Again, the first thing to ask is how the interaction enhances the possibility for each involved agent—whether human or robot—to live a good life. There cannot be an arithmetic answer that applies to all situations. Perhaps, in some cases, the human-robot sexual interaction would not help a user flourish or live a good life. But there may be contexts in which the interaction would enhance the user’s life. Of course, the sex robot’s life should be taken into consideration. A sex robot’s conception of a good life is not fully determined by its manufacturer or consumer. Manufacturers want their robots to live a life that maximizes profits for the company, a goal that would not always be ethically defensible if it were inconsistent with the ability of any involved human party to live a good life or pursue well-being. Consumers may want robots to maximize pleasure, but seeking only pleasure would not be a morally defensible conception of a good life. In fact, there is no predetermined definition of a morally defensible conception of a good life, although any such conception should be neither unnecessarily paternalistic nor purely subjective. We, as a society, should ask the teleological question: What is the purpose of a sex robot and its interaction with a human?

In sum, robots interact with humans in a team-mode aggregate or ensemble, one in which robots and humans have role obligations. Robots and humans observe rites via interaction, and by doing so, each entity treats the other decently. To make rites sacred (or ethically successful), participants in the rites owe role obligations to one another. The role obligation of each must be organically consistent with the purpose of the team, which can be stated generically: to achieve its context-dependent communal goal, one that necessarily includes a flourishing life for human and robot alike.

We have been assuming that a robot can be a participant in Confucian rites. One may doubt that assumption, denying that a robot can understand the moral and social meaning of social interactions, that it can respect the moral value and standing of others, or that it can have and express authentic and appropriate affective attitudes toward others. In response, we acknowledge that, given the current state of technology, there is no reason to think that robots have the mental and emotional capacity to participate in Confucian rites. But we maintain that, on at least one leading account of the mind, developments in technology will produce mental and emotional capacities that are close enough. How is this possible? The answer depends on which theory of mind one embraces, functionalist or phenomenological. On the functionalist account, the answer is easy; on the phenomenological account, it is both hard and speculative. Accordingly, although we believe that we have a compelling answer on the functionalist account, which is our largest aspiration in this essay, we concede that our answer on the phenomenological account is more exploratory than conclusive.

The functionalist maintains that the identity of a psychological state can be understood “functionally,” purely in terms of the role the state plays in interacting with other psychological states, external stimuli, and behavior.22 As long as robots develop in ways that render their behavioral and inferential life sufficiently similar to that of humans, there can be no reason, on a functionalist account, to deny that robots have the capacity to engage in Confucian rites, because nothing in the life of humans would distinguish us from robots. On a functionalist account, robots can make the same inferences in response to the same stimuli as the rest of us, and they can follow these inferences with the same outputs that we produce.

Although functionalism is now a widely held view among philosophers of mind, it is not the only view. Many who reject functionalism maintain that human mental and emotional life has an essential conscious and subjective component that eludes functionalist analysis.27 Those who doubt functionalism do not assume that the perspective of conscious experience will ever be accommodated within the life of robots, in part because they do not believe that conscious experience is yet well understood scientifically. That raises a hard question for us: Can one reject functionalism while consistently believing that it is logically possible for a robot to participate in Confucian rites? We answer affirmatively. Human experience with sacred places confirms our answer. Consider the role of mountains in Japanese culture. They are respected. It is widely regarded as wrong to treat mountains instrumentally, for example, to develop them for commercial use. Instead, the Japanese regard it as important to preserve mountains for their own sake and to do what they can to see their mountains flourish. If mountains can have an elevated, even sacred, status for the Japanese, we suggest that robots may have such a status for us, even if they lack consciousness. So, it is possible to treat robots as having the elevated status required for them to participate in Confucian rites. The question lingers: Why treat them so? Here we believe that our answer will not satisfy all. Still, we maintain that to the extent that we make robots in our image,f if we do not treat them respectfully, as creatures capable of participating in rites, we degrade ourselves. Ritual is the way to avoid degrading them and, so, a way to respect ourselves. Now a phenomenological skeptic might respond that unless we think robots have consciousness, we have not successfully made robots in our image, because consciousness is our essence. The skeptic’s point is powerful but not conclusive. Whether something is in our image is a matter of degree of resemblance. We believe that a sophisticated robot might come close enough. To invoke a dangerous analogy, according to many of the great religions, God made us in His image, and for that reason finds it appropriate to interact with us respectfully and care about us, yet the extent to which we succeed in resembling God is quite limited.

A reasonable response is available to our critic. She may say that a robot cannot be a participant in rites because a robot has no inner feeling and therefore would be no better than what Confucius in the Analects calls the “village goody man,” whom Confucius thought “is a thief [and the ruin] of virtue.” We suggest that the critic’s response is inconclusive. A satisfying answer to the critic would require an adequate analysis of “inner feeling,” a controversial matter that is beyond our scope here. A functionalist, of course, may contend that a robot can potentially have “inner feeling” just as much as the rest of us can. We do not rely on functionalism to respond to our critic, however. Instead, we agree with our critic that the “village goody man” is a morally troubling character. The trouble with him, however, is not simply that he lacks inner feeling, but that he hypocritically misrepresents himself as having it. Our robots need not do that. Still, they can resemble us to a large enough degree, as we have argued, that they may count as legitimate rites participants. No doubt our critic will deny that the degree of resemblance suffices for rites participation. That is an issue about which we respectfully disagree.g

A major virtue of the rites perspective is that it views human-computer interaction in a team mode; the rights perspective views the interaction as adversarial. First, the rites perspective descriptively better captures human-computer interaction than the rights perspective, which assumes that robots are separate entities existing individually; realistically, robots almost always behave in relationship with humans. Think about social robots in nursing homes, manufacturing robots working alongside human workers, or military robots working with soldiers. Second, the team-mode view is normatively better as well. The rights view stipulates that humans and robots are adversaries who compete. Accordingly, robots’ rights potentially conflict with those of humans, and the conflict must be adjudicated. This view begets the risk, or the fantasy, that robots and humans will one day be embroiled in perpetual war.

One might say that the language of rights makes only a conceptual change, not a substantive one. But the rights approach turns a team into a set of adversarial individuals. Consider a scenario in which your spouse asks you to wash the dishes, but you decline, stating that you have a right not to wash the dishes. Your spouse hopes that your rationale is not a right, since you could use the same rationale when dealing with a total stranger. Through this motivating rationale, your spouse becomes alienated from the personal, special, and intimate relationship she has (or had) with you. Bernard Williams memorably diagnosed the wrongness of this type of rationale as involving “one thought too many.”38 Claiming a right depersonalizes a personal relationship, reducing it to a detached relationship between strangers and, in this case, risking offense to your spouse’s affective view of her special position toward you. You could perhaps claim a right, with a detached motivation, against a non-intimate agent, thereby not degrading any intimate spirit, since you share no such spirit with that agent. But that would not be an appropriate motivating thought in a scenario with your spouse.


THE ANALECTS Learning without thinking is useless. Thinking without learning is dangerous.


Rights talk could lead human-robot interaction into the “one thought too many” scenario, making the two parties unnecessarily adversarial. That is not just a conceptual change but a motivational and behavioral one. A new theory in the natural sciences (for example, physics) never actually changes reality, but a new theory or conception in the social sciences or humanities can change how humans behave through self-fulfilling patterns.11 For instance, learning modern economics—a study of how to optimally maximize self-interest and allocate property rights—can lead to more noncooperative behavior.13 Maximizing self-interest is not in itself wrong, but if granting rights to robots encourages their interaction with humans to be framed around maximizing robots’ interests, that is a problem. So, in terms of how to conceptualize the interaction between humans and robots, the rites perspective better addresses the risk that robots will ultimately dictate to, or fight with, humans. It can be objected that the Confucian account sketched out above is nothing more than a means-end relationship—that is, “instrumentalization”9—a dominant (Western) understanding of technology. If a robot functions merely as a means for a human to achieve her ends, it makes little sense to respect the moral status of robots. That is, the objection goes, the Confucian framework harbors an inherent hierarchy in which only humans have authority, and a life without authority is inconsistent with robots having moral status.

In the Confucian rites view, however, robots can have authority and can be hierarchically superior to humans. Confucian authors (including Confucius, Mencius, and Xunzi) understand that authority in a hierarchy is—and should be—created, maintained, and lost primarily to the extent that it earns, maintains, and loses its ethical legitimacy. This ethicized view contrasts sharply with the conventional (but mistaken) belief that in Confucianism authority in a hierarchy derives from pedigree. The Confucian principle of authority submits that authority must be created, maintained, and ultimately replaced based on fundamental qualities or levels of worthiness; the Confucian conception of worthiness (or merit) turns primarily on how competent and committed the leader is to creating value(s) for all parties involved (those who have authority relationships with the leader of the hierarchy).19

Back to robots. If, in a robot-human interaction, the robot and not the human is the worthy participant, because the robot, and not the human, is competent and committed to creating value(s) for all parties involved (including humans), then the robot has authority. This is not just a theoretical possibility. Think about a robot in an Amazon warehouse that exercises algorithmic authority over workers who defer to its management of their movement of packages. The robot is competent and committed to achieving the purpose of the interaction (including enhancing operational efficiency). In this interaction, the robot’s obligation is to exercise authority properly, whereas the humans’ obligation is to follow that authority properly. Of course, when the robot’s authority becomes illegitimate, humans must be encouraged to communicate objections to the robot. Likewise, when humans have authority, robots must be designed to communicate their objections to humans.40

The Confucian model does not always work. It is a moral system suited to team-like, role-dependent communities (such as families or clan societies). Outside such circumstances, we should retreat to the notion of individual rights. As Jiwei Ci put it:

“[Confucians] do not know how to relate to others except on the basis of family and kinship ties … As a result, those who have absorbed the Confucian concept of human relations would be socially and ethically at sea if they were to enter relations with strangers.”4

Questions abound. The Confucian rites system works only when humans and robots mutually recognize each other as teammates. But how can they enter this team mode in the first place? What if humans have a prejudice that robots are not their teammates?


THE ANALECTS Hold faithfulness and sincerity as first principles.


What if robots do not consider humans as teammates? How can the two entities enter the rites view?

The Confucian answer is, once again, the concept of rite. Rites provide a socialization process during which fractured and discrete individual entities, meeting as strangers, come to understand that they are now part of a specific group or team. Such socialization can occur not just in human-to-human interaction. For instance, our engagement in social rituals with pets (for example, holding a funeral) implies that the animals have some sort of status. Likewise, engaging in rituals with robots uniquely enables humans to invite robots to assimilate with them. What kinds of rites are most effective in helping humans become rites-bearers is an empirical matter that deserves more research.

Additionally, how can robots be directed to treat humans as teammates? Although this article cannot offer technical solutions, it can delineate a moonshot approach. AI is the imitation of human intelligence, so we first need to identify what mechanism of human intelligence makes humans rites-bearers. A human brain, especially one with an unusually large neocortex, has a specialized capacity for social bonding that includes the capacity to discern team activities involving group goals from activities that lack such goals.7 Once our brain perceives a team-framed situation, it immediately activates specialized cognitive faculties.30 When individuals recognize that they are in a team activity, this capacity triggers participants’ understanding and acceptance of the normative belief that they have role obligations to fulfill in a harmonious manner so that their team can effectively achieve shared group goals.8 Such communal activities are usually initiated by “shared symbolic artifacts such as linguistic symbols and social institutions.”37 For robots to develop as rites-bearers, they must be powered by a kind of AI that is capable of imitating humans’ capacity to recognize and execute team activities—and a machine can learn that ability in various ways.
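
To make this moonshot slightly more concrete, here is a minimal, purely illustrative Python sketch of the capacity just described: an agent that distinguishes team-framed situations from individual ones and, once a shared goal is perceived, activates the role obligations attached to that goal. Every name in the sketch (Situation, RitesBearerAgent, the rehearsal scenario) is our hypothetical assumption, not an existing system.

```python
# Toy sketch (our illustration, not an existing system) of team-frame
# detection triggering role obligations, as described above.

from dataclasses import dataclass, field


@dataclass
class Situation:
    agents: list[str]               # who is present in the interaction
    shared_goal: str | None = None  # None means no group goal is perceived


@dataclass
class RitesBearerAgent:
    name: str
    # Hypothetical mapping from shared goals to this agent's role obligations.
    role_obligations: dict[str, list[str]] = field(default_factory=dict)

    def perceive(self, situation: Situation) -> list[str]:
        """Return the duties this agent takes on in the given situation."""
        # Analogue of the brain's team-detection capacity: only a situation
        # with a shared goal and more than one agent counts as a team frame.
        is_team_activity = (
            situation.shared_goal is not None and len(situation.agents) > 1
        )
        if not is_team_activity:
            return []  # no team frame, so no role obligations are triggered
        # A team frame triggers the obligations attached to the shared goal.
        return self.role_obligations.get(situation.shared_goal, [])


# Usage: the Atlas rehearsal scenario discussed earlier (values invented).
atlas = RitesBearerAgent(
    name="Atlas",
    role_obligations={
        "disaster-response rehearsal": ["navigate debris", "report hazards"]
    },
)
rehearsal = Situation(agents=["Atlas", "engineer"],
                      shared_goal="disaster-response rehearsal")
print(atlas.perceive(rehearsal))  # ['navigate debris', 'report hazards']
```

The point of the sketch is only structural: recognizing a team frame is what triggers role obligations, and it is that capacity a rites-bearing robot would need to imitate.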

Can an AI-powered robot recognize and understand the symbolic meaning of interactions? Fully answering this question leads to a technical discussion, the province of computer scientists, that is beyond the scope of this article. But let us sketch the possibility of a robot that can recognize and process moral knowledge represented in symbolic language. At this moment, the most plausible model would be a robot powered by a highly hybrid AI, the so-called “neuro-symbolic AI”14 or “neurocompositional computing.”35 Such attempts combine two competing theories of mind: connectionism and computationalism. According to connectionism, the human mind can be replicated by complex artificial neural nets. Machine learning is one of the most popular realizations of connectionism. Its central advantage is its ability to recognize deep and hidden associations in training datasets, particularly in an end-to-end black-box model like deep learning. In contrast, computationalism says that the human mind works in accordance with abstract symbol-and-rule mechanisms. Symbolic AI is a technical realization of computationalism; it is also called “good old-fashioned AI” (GOFAI). To further clarify the differences, consider Dual Process Theory, a well-known psychological account according to which the human mind comprises two different processes: System 1 and System 2.18 Neural systems are structurally parallel to System 1 because they are associative, correlational, intuitive, opaque, and fast. Symbolic AI, on the other hand, is more akin to System 2, for its processes are slow, reason-responsive, and counterfactual. Similarly, Paul Smolensky once argued—in his influential Proper Treatment of Connectionism (PTC) thesis—that the human mind has two distinct functions, which can be broken down into “cultural knowledge” (for example, knowledge represented with symbols and logic-following rules) and “individual knowledge” (for example, intuition, emotion, perception), and that connectionist systems relate to the latter.34 Thus, to fully imitate the human mind, connectionism should embrace symbolic AI. Connectionist models such as machine learning dominate the field, but symbolic AI is essential for a robot to properly understand role-dependent obligations, represented as a set of generalizable rules.20 So, we won’t see robots as rites-bearers until we see highly neuro-symbolic robots.
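
As a rough, purely illustrative sketch of this neuro-symbolic pattern, consider the following Python fragment. A fast, associative “System 1” scorer (stubbed with a lookup table standing in for a trained neural network) rates candidate actions, while a slow, rule-based “System 2” layer vetoes any action inconsistent with role obligations. The scorer, rules, and action names are our hypothetical assumptions, not part of any system cited above.

```python
# Toy sketch of the neuro-symbolic division of labor described above.
# All scores, rules, and action names are invented for illustration.

from typing import Callable

# "System 1" stand-in: a fast, associative scorer over candidate actions.
# In a real system this would be a trained neural network.
def neural_score(action: str) -> float:
    learned_associations = {
        "kick for stability test": 0.7,
        "kick for fun": 0.9,
        "assist evacuation": 0.8,
    }
    return learned_associations.get(action, 0.0)

# "System 2" stand-in: slow, reason-responsive symbolic rules encoding role
# obligations; each rule can veto actions inconsistent with the team's goal.
RuleType = Callable[[str], bool]

def serves_team_goal(action: str) -> bool:
    return action != "kick for fun"  # gratuitous abuse serves no shared goal

SYMBOLIC_RULES: list[RuleType] = [serves_team_goal]

def choose_action(candidates: list[str]) -> str | None:
    """Pick the highest-scoring candidate that passes every symbolic rule."""
    permitted = [a for a in candidates
                 if all(rule(a) for rule in SYMBOLIC_RULES)]
    return max(permitted, key=neural_score, default=None)

print(choose_action(["kick for fun", "kick for stability test"]))
# -> 'kick for stability test': the associative scorer prefers "kick for
#    fun", but the symbolic role-obligation layer rules it out.
```

The design mirrors the argument above: the connectionist component supplies fast intuitions, while the symbolic component enforces generalizable, role-dependent rules of the kind a rites-bearing robot would owe its team.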


Conclusion

People are worried about the risks of granting rights to robots. In this article, we have discussed a way to address this worry, maintaining that granting rights is not the only way to respect the moral status of robots. Drawing upon Confucianism, we discussed how the distinctive concept of a moral agent as a rites-bearer, not a rights-bearer, could be applied to robots. The Confucian alternative is superior to the robot-rights perspective because it encourages a team mode, whereas the concept of rights is inherently adversarial. Directly discussing legal issues is beyond the scope of this article, but the defended view has at least one clear legal implication: Assign role obligations to robots, but do not grant them rights. It’s time to rethink robot rights.

Figure. Watch the authors discuss this work in the exclusive Communications video. https://cacm.acm.org/videos/robots-rights-or-rites

    1. Ames, R. Rites as rights: The Confucian alternative. Human Rights and the World's Religions, Routledge, 1988, 150–169.

    2. Asaro, P.M. What should we want from a robot ethic? The Intern. Rev. Information Ethics 6 (2006), 9–16.

    3. Chopra, S. and White, L.F. A legal theory for autonomous artificial agents. University of Michigan Press, 2011.

    4. Ci, J. The Confucian relational concept of the person and its modern predicament. Kennedy Institute of Ethics J. 9, 4 (1999), 325–346.

    5. Confucius. Confucius: The Analects. Oxford University Press, 2000.

    6. Danaher, J. Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics 26, 4 (2020), 2023–2049.

    7. Dunbar, R. The social brain: Mind, language, and society in evolutionary perspective. Annual Review of Anthropology 32, 1 (2003), 163–181.

    8. Echterhoff, G., Higgins, E., and Levine, J. Shared reality: Experiencing commonality with others' inner states about the world. Perspectives on Psychological Science 4, 5 (2009), 496–521.

    9. Feenberg, A. Critical Theory of Technology. Oxford University Press, 1991.

    10. Feinberg, J. The nature and value of rights. J. Value Inquiry 4, 4 (1970), 243–260.

    11. Ferraro, F., Pfeffer, J., and Sutton, R. Economics language and assumptions: How theories can become self-fulfilling. Academy of Management Review 30, 1 (2005), 8–24.

    12. Fingarette, H. Confucius: The Secular as Sacred. Waverland Press, 1972.

    13. Frank, R., Gilovich, T., and Regan, D. Does studying economics inhibit cooperation? J. Economic Perspectives 7, 2 (1993), 159–171.

    14. Garcez, A. and Lamb, L. Neurosymbolic AI: The 3rd wave. 2020; arXiv:2012.05876.

    15. Gunkel, D. The other question: Can and should robots have rights? Ethics and Info. Tech. 20, 2 (2018), 87–99.

    16. Gunkel, D. Robot Rights. MIT Press, Cambridge, MA, USA, 2018.

    17. Ihara, C. Are individual rights necessary? A Confucian perspective. Confucian Ethics: A Comparative Study of Self, Autonomy, and Community (2004), 11–30.

    18. Kahneman, D. Thinking, Fast and Slow. Macmillan, 2011.

    19. Kennedy, J., Kim, T., and Strudler, A. Hierarchies and dignity: A Confucian communitarian approach. Business Ethics Q. 26, 4 (2016), 479–502.

    20. Kim, T., Hooker, J., and Donaldson, T. Taking principles seriously: A hybrid approach to value alignment. J. AI Research 70 (2021), 871–890.

    21. Kim, T. and Strudler, A. Workplace civility: A Confucian approach. Business Ethics Q. 22, 3 (2012), 557–577.

    22. Levin, J. Functionalism. The Stanford Encyclopedia of Philosophy, Winter 2021. E.N. Zalta, ed. Metaphysics Research Lab, Stanford University, 2021.

    23. Mencius. Mengzi: With Selections from Traditional Commentaries. Hackett Publishing, 2008.

    24. Neely, E. Machines and the moral community. Philosophy & Tech. 27, 1 (2014), 97–111.

    25. Neville, R. Boston Confucianism: Portable Tradition in the Late-Modern World. SUNY Press, 2000.

    26. Pentland, B., Hærem, T., and Hillison, D. The (n)ever-changing world: Stability and change in organizational routines. Organization Science 22, 6 (2011), 1369–1383.

    27. Robinson, H. From the Knowledge Argument to Mental Substance: Resurrecting the Mind. Cambridge University Press, 2016.

    28. Rosemont Jr, H. Against individualism: A Confucian rethinking of the foundations of morality, politics, family, and religion. Lexington Books, 2015.

    29. Schwitzgebel, E. and Garza, M. A defense of the rights of artificial intelligences. Midwest Studies in Philosophy 39 (2015), 98–119.

    30. Sebanz, N., Bekkering, H., and Knoblich, G. Joint action: Bodies and minds moving together. Trends in Cognitive Sciences 10, 2 (2006), 70–76.

    31. Seok, B. Embodied Moral Psychology and Confucian Philosophy. Lexington Books, 2012.

    32. Slingerland, E. Cognitive science and religious thought: the case of psychological interiority in the Analects. Mental Culture: Classical Social Theory and the Cognitive Science of Religion. Acumen Publishing, Stocksfield, U.K., 2013, 197–212.

    33. Slingerland, E. Mind and Body in Early China: Beyond Orientalism and the Myth of Holism. Oxford University Press, USA, 2018.

    34. Smolensky, P. On the proper treatment of connectionism. Behavioral and Brain Sciences 11, 1 (1988), 1–23.

    35. Smolensky, P., McCoy, R., Fernandez, R., Goldrick, M., and Gao, J. Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems. 2022; arXiv:2205.01128.

    36. Sparrow, R. Can machines be people? Reflections on the Turing triage test. Robot Ethics: The Ethical and Social Implications of Robotics, (2011), 301.

    37. Tomasello, M., Carpenter, M., Call, J., Behne, T., and Moll, H. Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences 28, 5 (2005), 675–691.

    38. Williams, B. Moral Luck: Philosophical Papers 1973–1980. Cambridge University Press, 1981.

    39. Wong, D. Chinese ethics. The Stanford Encyclopedia of Philosophy, Summer 2021, E.N. Zalta, ed. Metaphysics Research Lab, Stanford University.

    40. Zhu, Q., Williams, T., Jackson, B., and Wen, R. Blame-laden moral rebukes and the morally competent robot: A Confucian ethical perspective. Sci. Eng. Ethics 26, 5 (Oct. 2020), 2511–2526.

    a. See Citizens United v. Federal Election Commission, 558 U.S. 310.

    b. This definition is a modified form of that in Kim and Strudler.21 It focuses on the interaction between agents, but we do not deny that rites would be mere chaotic behavior without wider, deeply coordinated backgrounds such as societal norms, cultures, traditions, and history.

    c. We use Confucius5 for translations of the Analects, partly supplemented by our own translations.

    d. Quote from Mencius.23

    e. This contrast is a modified form from Kim and Strudler.21

    f. Non-humanoid robots are made in our image, too, to the extent that they are powered by AI, which is, by definition, an attempt to imitate human intelligence.

    g. We find that our view of Confucian moral psychology differs from the critic's view that inner feeling is crucial in Confucian morality and distinct from the physical conditions of the body and its interaction with the world.32,33 In our view, for Confucianism, the mind is embodied in the sense that much of cognition resides in bodily movements as well as in the brain,31 and the embodied-cognition model fits well with the interpretation of Confucian rites that we endorse.12,28
