Altruism in Robotics

Looking at state-of-the-art technology, today's robots are nowhere close to the intelligence and complexity of humans or animals, nor will they reach that stage in the near future. While it seems far-fetched for a robot's legal status to differ from that of a toaster, there is already a notable difference in how we interact with certain types of robotic objects. This occurs mainly because of our tendency to project onto them cognitive capabilities, emotions, and motivations that do not necessarily exist.

There is something about today’s robots that looks and feels different. It may be precisely because we perceive robots differently than we do other objects that one should consider extending some level of legal protection to the former but not the latter. This conclusion is consistent with Hume’s thesis that if “ought” cannot be derived from “is,” then axiological decisions concerning moral value are little more than sentiments based on how we feel about something at a particular time. To give a specific example, violent behavior toward robotic objects feels wrong to many of us, even when we know that the abused object does not experience anything. Consequently, we should try to accommodate and work with, rather than against, these current experiences with robots.

There are, however, a number of complications with this approach. First, basing decisions concerning moral standing on individual perceptions and sentiment can be criticized as capricious and inconsistent. Because sentiment is a matter of individual experience, it remains uncertain whose perceptions actually matter or make the difference.

Second, we project our own inherent qualities onto other entities to make them seem more “human-like”—qualities like emotions, intelligence, and sentiments. Even though these capabilities do not (for now at least) really exist in the mechanism, we project them onto the robot in such a way that we come to perceive them as qualities the robot actually possesses. What ultimately matters is not what the robot actually is “in and of itself.” What makes the difference is how the mechanism comes to be perceived. It is, in other words, the way the robot appears to us that determines how it comes to be treated.

Finally, what ultimately matters is how “we” see things. The principal reason we need to consider extending legal rights to others, like robots, is for our own sake. This follows the well-known argument for restricting animal abuse: because our actions toward non-humans reflect our morality, we become inhumane persons if we treat animals in inhumane ways. This logically extends to the treatment of robotic companions. This way of thinking, however, transforms animals and robot companions into nothing more than instruments of human self-interest. The rights of others, in other words, are not about them; they are all about us.

According to another way of thinking, we are first confronted with a mass of anonymous others who intrude on us and to whom we are obligated to respond even before we know anything at all about them. On this view, moral consideration is no longer seen as being ‘intrinsic’ to the entity; instead, it is seen as something that is ‘extrinsic.’ In other words, it is attributed to entities within social relations and within a social context. As we encounter and interact with others—whether they be other human persons, an animal, the natural environment, or a robot—this other entity is first and foremost situated in relationship to us. Consequently, the question of social and moral status does not necessarily depend on what the other is in its essence but on how it supervenes before us and how we decide, in “the face of the other,” to respond.

Such an ethical framework is not based on “respect for others.” Rather, it is about deciding how to respond to the ‘Other’ who supervenes before the individual in such a way that always and already places the assumed rights and privilege of that individual in question.

When one asks “Can or should robots have rights?” the form of the question already makes an assumption, namely that rights are a kind of personal property or possession that an entity can have or be bestowed with. However, the question can also be situated otherwise: “What does it take for something—another human person, an animal, a mere object, or a social robot—to supervene and be revealed as Other?” This other question—a question about others that is situated otherwise—constitutes a more precise and properly altruistic inquiry. It is a mode of questioning that remains open, endlessly open, to others and other forms of otherness. For this reason, it deliberately interrupts and resists the imposition of power. The core problem with granting or extending rights to others is that doing so presupposes the existence and maintenance of a position of power from which to do the granting.

Such a shift in our mindset has the potential to reorient the way we think about robots and the question concerning rights. This means, of course, that we would be obligated to consider all kinds of others as Other, including other human persons, animals, the natural environment, artifacts, technologies, and robots. An “altruism” that tries to limit in advance who can or should be Other would not be, strictly speaking, altruistic.

The original Data Driven Investor article was posted here. The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.