On the Ethics of Human-Robot Interaction: A New Framework

I. Introduction

Robots are becoming increasingly sophisticated. What were once rigid automatons confined to factory floors now walk, speak, respond to touch, and display behaviours that invite interpretation in psychological terms. Within decades, humanoid robots of considerable verisimilitude will be commercially available: robots that move fluidly, express apparent emotion, and interact with humans in ways that blur the boundary between tool and social partner.

This technological trajectory raises ethical questions for which our existing frameworks are poorly prepared. Western moral philosophy has developed rich resources for evaluating human conduct toward other humans, toward animals, and toward the natural environment. But robots fit none of these categories comfortably. They are not persons; they possess no consciousness, no interests, no welfare to be promoted or harmed. Neither are they mere tools in the way that hammers and automobiles are tools. Their designed capacity to simulate psychological properties such as emotion, pain, desire, and resistance invites forms of engagement that hammers do not. Robots occupy an ontological grey zone, and our moral concepts struggle to gain purchase.

This essay addresses a particular class of cases within this broader terrain, namely cases in which human-robot interaction produces strong moral intuitions despite the absence of any victim.

Consider the following. A company manufactures a child-sized robot with childlike features, including a small body, round face, and high voice, designed for sexual use. The robot is not conscious. It has no experiences, no capacity for suffering, and no interests susceptible to setback. No child is harmed by its use. Yet for most observers, the scenario occasions immediate moral repulsion. Something appears wrong. But what, exactly?

Or consider a company that manufactures a humanoid robot whose distinguishing feature is that it simulates non-consent. It vocalises refusal. It struggles against the user. It displays apparent distress. Users acquire this product specifically to overcome this simulated resistance. No being suffers. The robot experiences nothing. What grounds remain for moral objection?

Or consider an individual who acquires a highly realistic humanoid robot and proceeds to torture it systematically, burning it, cutting it, and methodically destroying its limbs, while the robot produces distress vocalisations. She records these sessions, laughing, and distributes the recordings. The robot experiences nothing. No being suffers. Yet moral discomfort persists.

These cases share a structure: robust moral intuitions conjoined with weak explanatory resources. There is no victim. There is no harm. There may be no measurable consequence whatsoever. The standard conceptual apparatus of moral philosophy, including welfare, rights, duties, and harm, appears to lack purchase. And yet the intuitions remain. They demand explanation.

The thesis of this essay is that the actions described are wrong not because they harm any being, but because of what they express and cultivate in the agent who performs them. The analytical instrument I shall develop, Cognitive Representation Theory, holds that the moral status of an action toward a robot is determined by the agent’s cognitive representation of that robot, not by the robot’s actual properties. This principle, I shall argue, is not an ad hoc solution to a novel problem but a formalisation of evaluative practices we already employ in other domains. Robots do not require a new ethics; they require us to articulate the ethics we already have.

II. The Limits of Patient-Centred Ethics

Consequentialism, deontology, and virtue ethics as standardly formulated, the three dominant traditions in Western moral philosophy, share a structural feature that becomes problematic in the present context. All three are, in different ways, patient-centred. They presuppose, explicitly or implicitly, that moral evaluation requires a patient toward whom the action is directed and whose status grounds the evaluation.

Consequentialism

Consequentialist frameworks evaluate actions by their effects on the welfare of affected parties. An action is right insofar as it produces good consequences, and wrong insofar as it produces bad ones. The locus of moral significance is the patient whose welfare is affected.

Applied to the cases under consideration, consequentialism encounters an immediate difficulty. There are no welfare effects. The robot has no welfare. If we bracket downstream effects on the agent or third parties, as the thought experiments stipulate, there are no consequences to evaluate. The consequentialist calculus returns null.

One might attempt to rescue the framework by insisting that downstream effects cannot be bracketed. Surely such behaviours will generalise to human victims, coarsen moral sensibilities, or produce other measurable harms. This empirical hypothesis may prove correct. But observe what the move concedes. If the empirical evidence were to show no such effects, the consequentialist framework would be compelled to pronounce these acts permissible; if it were to show beneficial effects, such as providing an outlet that reduces offending against actual persons, it would be compelled to pronounce them obligatory.

This verdict conflicts with the phenomenology of the intuitions. The wrongness appears to inhere in the act itself, or in the agent performing it, rather than in contingent causal sequelae. If this phenomenological report is accurate, consequentialism cannot capture what these intuitions track.

Deontology

Deontological frameworks evaluate actions by their conformity to duties or rules. On the Kantian formulation, moral agents possess obligations toward rational beings, that is, entities capable of autonomous choice, of setting their own ends, and of moral reasoning. The fundamental principle requires treating such beings as ends in themselves, never merely as means.

Robots are not rational beings in the morally relevant sense. They execute algorithms rather than setting ends. They process inputs rather than reasoning morally. Whatever duties moral agents possess, they do not appear to be owed to machines. The deontological framework, oriented toward the rights and dignity of patients, finds no patient to which such considerations apply.

One might invoke duties to oneself, specifically the Kantian obligation not to degrade one’s own rational nature. But this strategy merely relocates the puzzle. It requires an explanation of why these interactions degrade rational nature, which presupposes an account of what is wrong with them. That is the very question at issue. The explanatory circle does not close.

Virtue Ethics

Virtue ethics appears more promising. This tradition evaluates actions not by their consequences or by duties owed, but by what they reveal and reinforce about the agent’s character. The virtues are stable dispositions toward excellent action and response, and the vices their contraries. The evaluative focus is the agent rather than the patient.

Yet virtue ethics, as typically formulated, encounters its own difficulty. Consider the claim that torturing a lifelike robot expresses cruelty. A natural objection presents itself. Cruelty is a disposition to inflict suffering on those capable of suffering. The robot cannot suffer. Therefore one cannot, strictly speaking, be cruel to it. One might damage it, destroy it, or disassemble it. But cruelty, which implies a patient capable of experiencing torment, has no application.

If vices are defined relationally, by reference to actual patients who actually suffer, then virtue ethics loses traction precisely where we require it.

Diagnosis

The three frameworks fail for a common reason. Each assumes, at some level, that the agent’s representation of the situation matches the situation itself. Consequentialism evaluates actual effects on actual beings. Deontology evaluates duties owed to actual rational agents. Even virtue ethics, in defining vices by relation to suffering patients, assumes that the patient actually suffers.

In typical human interaction, this assumption is innocuous. The person I harm is actually there. The suffering I cause is actual suffering. Representation and reality align. But in human-robot interaction, they systematically diverge. The user may represent the robot as a child, a victim, or a suffering being, while the robot is none of these things. The frameworks, geared to evaluate reality, cannot accommodate cases where moral significance attaches to representation.

What is required is a framework that evaluates actions in terms of the agent’s cognitive representation rather than the object’s actual properties. The following section develops such a framework.

III. Cognitive Representation Theory

I propose the following principle for evaluating human-robot interaction.

Cognitive Representation Theory (CRT): The moral status of an agent’s action toward an object is determined by the agent’s cognitive representation of that object. Specifically:
(i) identify the agent’s cognitive representation, that is, what the agent functionally treats the object as and what properties their engagement is responsive to;
(ii) evaluate the action as if that representation were veridical;
(iii) the resulting evaluation applies to the actual action.
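
The principle admits of a schematic restatement. The notation below is a gloss of mine rather than part of CRT's official formulation: $R_A(o)$ stands for agent $A$'s cognitive representation of object $o$, $\alpha$ for the action performed, and $M(\cdot)$ for moral evaluation.

$$M\big(A\ \text{performs}\ \alpha\ \text{on}\ o\big) \;=\; M\big(A\ \text{performs}\ \alpha\ \text{on}\ o^{*}\big),$$

where $o^{*}$ is the counterfactual object that veridically instantiates every property in $R_A(o)$. The right-hand side is evaluated by ordinary patient-centred means; clause (iii) then transfers the resulting verdict to the actual action.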

Several components of this formulation require elaboration.

Cognitive Representation

Cognitive representation refers not to explicit belief but to functional orientation. A person may explicitly believe, and sincerely assert, that a robot is mere machinery consisting of plastic, metal, and circuitry, while functionally treating it as a suffering child. The distinction is between what one would avow and what one’s engagement presupposes.

The relevant representation is revealed by phenomenology, specifically by what makes the experience appealing, arousing, or satisfying. Consider the user of the child-form sex robot. His arousal is responsive to the robot’s childlike features: the small body, the round face, and the high voice. These features constitute the product’s appeal. Without them, he would have selected a different product. His engagement presupposes a representation of the robot as childlike, whatever he might explicitly avow about its mechanical nature.

Similarly, the torturer’s enjoyment derives from the robot’s apparent suffering, including the distress vocalisations and the simulated pain responses. Were the robot simply to cease functioning silently when damaged, the activity would lose its appeal. Her pleasure is keyed to apparent anguish. Her engagement presupposes a representation of the robot as suffering, regardless of what she knows about its actual incapacity for experience.

The principle can be stated thus. One cannot find an object appealing in virtue of features one does not represent it as possessing. The phenomenology of appeal discloses the operative representation.

The Substitution Test

CRT instructs us to evaluate the action as if the cognitive representation were veridical. This is a counterfactual evaluation. We ask what the action would be, morally speaking, if the object actually possessed the properties the agent represents it as possessing.

The user of the child-form sex robot represents the robot as childlike. If that representation were veridical, meaning that the object actually were a child, the action would be sexual engagement with a child. That is the relevant moral characterisation, and it applies to the actual action. The non-consent simulator user represents the robot as a non-consenting victim. If veridical, the action would be rape. The torturer represents the robot as a suffering being. If veridical, the action would be cruelty toward a sentient creature.

The counterfactual is a test, not a metaphysical claim. CRT does not assert that robots are conscious, that they genuinely suffer, or that simulated children are actual children. It holds that moral evaluation tracks representation, and it employs the counterfactual as a device for extracting the moral significance of that representation.

Expression and Cultivation

CRT yields moral verdicts via two distinct but related mechanisms: expression and cultivation.

Expression: An action may express a pre-existing disposition. The paedophile’s use of a child-form robot expresses paedophilic desire, a desire that existed prior to and independently of the robotic interaction. The robot provides an occasion for the desire’s manifestation; it does not create the desire. On this dimension, the action is diagnostic. It reveals what the agent already is.

Cultivation: An action may also cultivate or reinforce a disposition. The repeated torture of lifelike robots may strengthen cruel dispositions, habituate the agent to taking pleasure in apparent suffering, and lower thresholds for similar engagement. On this dimension, the action is formative. It shapes what the agent becomes.

These mechanisms are not mutually exclusive. A single action may both express an existing vice and cultivate its further development. Nor is either mechanism strictly necessary for wrongness. An action that merely expressed a vice without reinforcing it would still be wrong qua expression. An action that merely cultivated a vice without expressing a pre-existing one would still be wrong qua cultivation. CRT captures both dimensions.
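
Extending the schematic notation introduced above, each mechanism supplies an independent sufficient condition for wrongness along CRT's agent-focused dimension; the rendering below is a gloss on the argument, not an additional commitment. For any vice $v$:

$$\mathrm{Expresses}(\alpha, v) \;\Rightarrow\; \mathrm{Wrong}_{\mathrm{CRT}}(\alpha), \qquad \mathrm{Cultivates}(\alpha, v) \;\Rightarrow\; \mathrm{Wrong}_{\mathrm{CRT}}(\alpha).$$

Neither antecedent is necessary for the other, and a single action may satisfy both.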

IV. The Intrinsic Nature of Character

Why should representation determine moral status? What grounds this principle?

The answer lies in the metaphysics of character. Virtue ethics evaluates agents in terms of their character, understood as the stable dispositions that constitute who they are. These dispositions consist in what the agent desires, enjoys, is drawn toward, and takes satisfaction in. These psychological states are intrinsic to the agent. They exist in the agent regardless of whether their objects exist in the world.

Consider the following. If one’s enjoyment is structured around apparent suffering, that enjoyment is real. It occurs. It is part of one’s psychological economy. Whether anything actually suffers does not reach back and alter what is occurring in one’s mind. The desire is the same desire. The pleasure is the same pleasure. The only difference is whether reality cooperates with the representation.

An analogy may illuminate this point. A person who sincerely desires to harm children possesses a vicious character regardless of whether he ever encounters a child. His desire exists in him. It constitutes part of who he is. It would manifest given appropriate circumstances. We do not require him to actualise the desire before pronouncing on his character. The desire itself is the vice.

Similarly, a person whose pleasure is keyed to apparent suffering possesses a cruel disposition regardless of whether anything actually suffers in his presence. His pleasure in apparent suffering is a psychological reality. It manifests in his engagement with the robot. That no genuine suffering occurs does not alter what is occurring in him.

Character, in short, is constituted by the agent’s psychological orientation, by what they desire, enjoy, and are drawn toward. These orientations are intrinsic. They do not depend for their existence on the existence of their objects. A person drawn to children is so drawn whether or not children are present. A person who delights in suffering so delights whether or not suffering occurs. Reality’s cooperation affects whether harm results; it does not affect what kind of person the agent is.

CRT follows from this recognition. If character is constituted by psychological orientation, and psychological orientation is intrinsic to the agent, then character must be evaluated in terms of representation rather than reality. What the agent represents themselves as engaging with, and what their desires, pleasures, and responses are keyed to, is what reveals their character. The actual properties of the object, where they diverge from the representation, are morally inert.

This is why CRT is not an ad hoc principle invented to handle a novel problem. It follows from taking seriously the claim that character is intrinsic to agents. We shall now see that this principle is already operative in our moral practice.

V. CRT in Existing Moral Practice

If CRT is sound, we should expect to find it operative in cases beyond human-robot interaction, namely cases where representation and reality diverge and where we evaluate based on the former rather than the latter. This prediction is confirmed.

Mistaken Object Cases

The mistaken predator. A man with paedophilic desires arranges to meet what he believes to be a fourteen-year-old girl. He travels to the meeting location with the intention of sexual contact. The “girl” proves to be an adult police officer conducting an investigation. No child was involved. No child was harmed.

The man is nonetheless judged, legally and morally, as having expressed paedophilic desire, as having attempted to prey upon a child, and as being the kind of person who would do such things. The absence of an actual child does not dissolve the judgment. We evaluate based on what he represented himself as doing, what his actions were oriented toward, and what his desires were responsive to. His cognitive representation was of a child. By CRT, we evaluate accordingly.

The mannequin assault. Late at night, a man encounters what he takes to be a homeless person sleeping in a doorway. He kicks and beats the figure, expressing contempt and hostility. The figure is a discarded mannequin. No person was present. No one was harmed.

We nonetheless judge this man as having expressed cruelty, as being disposed toward violence against the vulnerable, and as revealing something vicious in his character. The mannequin’s insensibility does not exculpate him. We evaluate based on his cognitive representation. He represented himself as attacking a helpless person, and that representation reveals his character.

Attempt and Intention

Attempted murder. A woman fires a weapon at her husband, intending to kill him. The weapon misfires, and the husband is unharmed. She is charged with attempted murder and faces penalties approaching those for completed murder.

Why do we punish attempted murder severely? The victim is uninjured. No harm occurred. We punish it because of what the agent was trying to bring about, namely what her intention was oriented toward and what she represented herself as doing. Her cognitive representation was of killing her husband. By CRT, that representation determines the moral status of her action.

Virtuous Representation

CRT applies equally to virtuous action under mistaken representation.

The attempted rescue. A woman perceives what she takes to be a child drowning in a river. Without hesitation, she plunges in, risking her own life. The “child” proves to be a bundle of clothing snagged on debris. No one was in danger. No one was saved.

We nonetheless judge her as having acted courageously, as having expressed compassion and self-sacrifice, and as being the kind of person who would risk herself for a stranger. The bundle’s inanimacy does not diminish her virtue. We evaluate based on her cognitive representation. She represented herself as saving a child, and that representation reveals her character.

The Principle Confirmed

These cases confirm that CRT is not a novel principle but a formalisation of existing practice. When representation and reality diverge, we evaluate agents based on the former. The paedophile who pursues an adult officer is judged by what he thought he was pursuing. The assailant who beats a mannequin is judged by what he thought he was beating. The would-be rescuer who saves nothing is judged by what she thought she was saving.

What is distinctive about human-robot interaction is not that it requires a new evaluative principle, but that it creates systematic and intentional divergence between representation and reality. Robots are designed to elicit representations that do not correspond to their actual properties. A child-form sex robot is engineered to elicit the representation “child.” Its entire commercial purpose depends on successfully producing this representation. The divergence is not accidental but constitutive.

CRT handles these cases by the same principle that handles mistaken-object cases. The difference lies merely in the source of the divergence. In one case it is error, and in the other it is design. The moral logic is identical.

VI. Application to Human-Robot Interaction

With CRT established and confirmed, we return to the cases that opened this essay.

The Child-Form Sex Robot

The user’s arousal is responsive to the robot’s childlike features. These features constitute the product’s appeal and differentiate it from adult-form alternatives. His cognitive representation, revealed by what his arousal is keyed to, is of a childlike being.

Apply CRT. If the representation were veridical, if the object actually possessed the represented properties, the action would be sexual engagement with a child. This characterisation determines the moral status. The user expresses paedophilic desire. The vice is real. Only the victim is absent.

The Non-Consent Simulator

The product’s distinguishing characteristic is its simulation of resistance. It vocalises refusal. It struggles physically. It displays apparent distress. The user’s arousal is responsive to these features. Without them, the product would hold no distinctive appeal. His cognitive representation is of a non-consenting being whose resistance he overcomes.

Apply CRT. If the representation were veridical, the action would be rape. This characterisation determines the moral status. The user expresses desire structured around violation. The vice is real. Only the victim is absent.

The Torture for Amusement

The torturer’s pleasure derives from the robot’s apparent suffering. This includes distress vocalisations, simulated pain responses, and apparent anguish as damage is inflicted. Were the robot simply to cease functioning silently, the activity would lose its appeal. Her enjoyment is keyed to apparent suffering. Her cognitive representation is of a being in torment.

Apply CRT. If the representation were veridical, the action would be torture of a sentient being. This characterisation determines the moral status. She expresses cruelty. The vice is real. Only the victim is absent.

Contrasting Cases

CRT’s explanatory power is demonstrated by its capacity to distinguish cases that are superficially similar but morally different.

The engineer’s demonstration. An engineer strikes a quadruped robot during a technical presentation to demonstrate its balance-recovery capabilities. The action is physically similar to abuse but morally neutral.

CRT explains this case as follows. The engineer’s cognitive representation is of a machine being tested. Her engagement is responsive to the robot’s technical properties, including sensors, algorithms, and mechanical responses. There is no pleasure in apparent suffering and no engagement with the robot as a being that hurts. The counterfactual evaluation yields testing equipment. No vice is expressed.

The child’s exploration. A young child disassembles a robot insect, laughing as she removes its legs. The behaviour resembles the adult torturer’s but is morally innocent.

CRT explains this case as well. The child’s cognitive representation is of an object of curiosity. Her engagement is exploratory. The laughter expresses discovery rather than sadistic pleasure. Young children may lack the conceptual apparatus required to represent suffering as suffering. The same external behaviour, mediated by different cognitive representations, carries different moral significance.

The medical simulation. A medical student practises procedures on a robot that simulates pain responses. Her goal is to minimise simulated distress through proper technique. The scenario is unproblematic.

CRT explains this outcome. The student’s cognitive representation is of a pedagogical instrument. Her engagement is keyed to learning, to skill development, and to reducing apparent suffering rather than producing it. The counterfactual evaluation yields conscientious training. No vice is expressed.

These contrasts demonstrate that CRT is not constructed to condemn. It is a general principle that distinguishes innocent from vicious engagement by attending to what the engagement is responsive to, what representation it presupposes, and what character it thereby reveals.

VII. Objections and Replies

The Video Game Objection

Objection: Millions of people regularly simulate violence in video games, including shooting, stabbing, and exploding human-appearing characters. If CRT is sound, these players express vicious character. But this verdict is implausible, since video game violence is widely regarded as morally innocuous.

Reply: The objection assumes that video game players cognitively represent their targets as persons they are murdering. Empirical phenomenology suggests otherwise. For most players, the cognitive representation is closer to “obstacles in a skill challenge,” “opponents in a competitive game,” or “antagonists in a narrative frame.” The satisfaction derives from mastery, achievement, and strategic success, not from apparent suffering.

Two observations support this. First, replace human-appearing enemies with robots or abstract shapes, and much of the satisfaction is preserved; this would not be so if the appeal were keyed to apparent humanity or apparent suffering. Second, violence in video games typically occurs within narrative frames of justification, such as defending innocents or defeating aggressors, so that the cognitive representation is of justified conflict rather than gratuitous cruelty.

The genuine test case is a video game designed specifically for unjustified cruelty, one in which torturing innocents or inflicting sexual violence is the sole source of appeal. Such games are rare. When they exist, they are controversial and produce widespread moral discomfort. CRT predicts this reaction, and the prediction is confirmed.

Additionally, physical robots engage embodied cognition in ways that screen-based interaction does not. Kicking a robot enlists one's own body in the act; manipulating a controller does not. The embodied dimension may sustain cognitive representations that purely visual interaction cannot.

The Subjectivity Objection

Objection: CRT entails that identical physical actions can carry different moral statuses depending on the agent’s internal states. This renders morality unacceptably subjective.

Reply: The objection conflates subjectivity with agent-dependence. CRT holds that moral status depends partly on the agent’s psychology. This is agent-dependent but not subjective in any problematic sense. The agent’s cognitive representation is a fact, namely a psychological fact about what their engagement is responsive to. It is not up to the agent to decide what their representation is. It is revealed by phenomenology and constrained by logic.

Moreover, the feature the objection identifies is not unique to CRT. Any agent-focused moral framework holds that identical external actions can express different character traits depending on internal states. Giving money can express generosity or vanity. Speaking difficult truths can express honesty or cruelty. The moral significance of action has always depended on what it expresses about the agent.

What CRT provides is a criterion for identifying the relevant internal state when external anchors are absent. This criterion is the cognitive representation revealed by phenomenology. It is no more subjective than the assessments of intention and mental state that pervade legal and moral evaluation.

The Epistemic Access Objection

Objection: Cognitive representations are internal and potentially hidden. How can third parties, such as courts, regulators, or moral evaluators, determine what representation an agent held? CRT may be theoretically sound but practically inapplicable.

Reply: Cognitive representation is constrained by phenomenology and by logic. One cannot enjoy the torture robot’s distress vocalisations without one’s enjoyment being responsive to apparent distress. That is what makes them distress vocalisations rather than arbitrary sounds. One cannot find childlike features arousing while cognitively representing the object as an adult. The structure of appeal constrains the possible representations.

Furthermore, product design reveals what representations products are built to afford. A robot engineered with childlike proportions, childlike voice, and childlike behaviour is engineered to afford the representation “child.” The design makes certain representations probable and others implausible. Epistemic access to representation is not perfect, but it is bounded and tractable.

The objection also proves too much. Legal systems routinely assess intention, premeditation, and mental state. These are internal phenomena no more accessible than cognitive representation. The difficulty of epistemic access does not impugn the moral relevance of what is accessed.

The Asymmetry Objection

Objection: Does CRT work in reverse? If an agent tortures a conscious being while believing it to be a robot, is the agent absolved? If so, CRT yields implausible verdicts. If not, CRT is asymmetrical in a way that requires explanation.

Reply: CRT is indeed asymmetrical, but the asymmetry is principled. CRT provides a sufficient condition for vice. If the cognitive representation is vicious, vice is expressed regardless of reality. But it does not provide a sufficient condition for permissibility. A non-vicious representation does not render an action permissible if reality involves a victim.
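
In the same schematic notation as before, and again merely as a gloss, the asymmetry can be stated compactly:

$$\mathrm{Vicious}\big(R_A(o)\big) \;\Rightarrow\; \mathrm{ViceExpressed}(A, \alpha)$$
$$\neg\,\mathrm{Vicious}\big(R_A(o)\big) \;\not\Rightarrow\; \mathrm{Permissible}(\alpha)$$

The first implication holds whatever $o$'s actual properties may be; the second fails precisely when $o$ is in fact a conscious victim, a point developed in the conclusion.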

VIII. Implications

The analysis yields several implications for ethics, regulation, and design.

Regulatory Implications

CRT provides grounds for regulation that do not depend on contested empirical claims about downstream effects. A society might prohibit child-form sex robots not because such prohibition has been demonstrated to reduce offending against actual children, a claim that may never be conclusively established, but because their use is constitutively vicious. The wrongness resides in the act and the agent, not in speculative consequences.

This shifts the regulatory burden. One need not prove that permitting such robots causes measurable harm. One need only establish that their use expresses vicious character. The debate becomes normative rather than empirical. The question is whether this expression of character is something a society may legitimately prohibit. That question is difficult, but it is the right question, and unlike the empirical question about downstream effects, it is not one we lack the tools to answer.

Design Implications

CRT attributes moral significance to design decisions. Robots engineered to afford vicious cognitive representations, such as simulating suffering, presenting childlike features for sexual use, or displaying apparent non-consent, create affordances for vice that simpler machines do not. The design constitutes a moral choice.

Designers cannot escape responsibility by noting that their products are “merely machines.” The machine’s ontological status is precisely what makes cognitive representation crucial, and design determines what representations the product affords. A robot designed to produce the representation “suffering child” is designed to afford vicious engagement, whatever disclaimers accompany it.

Theoretical Implications

CRT clarifies the structure of virtue ethics. The tradition has standardly formulated vices in relational terms, for example cruelty as a disposition to cause suffering and compassion as a disposition to relieve it. This formulation works when representation and reality align, as they typically do in human interaction. But it fails when they diverge.

CRT reveals that the relational formulation was always a proxy for something deeper, namely the agent’s cognitive orientation. What constitutes cruelty is not causing suffering but being disposed to enjoy apparent suffering, that is, to have one’s pleasure keyed to the appearance of another’s pain. The suffering need not be actual. The representation suffices. Robots make this visible by creating cases where representation and reality cleanly diverge.

This is not a revision of virtue ethics but a clarification. Virtue ethics was always agent-focused. CRT makes explicit what that focus entails when we attend carefully to the metaphysics of character.

IX. Conclusion

Robots of increasing sophistication and verisimilitude will become common features of human life within the coming decades. Our interactions with them will be varied, intimate, and morally significant. We require frameworks for evaluating these interactions, frameworks that neither dismiss them as morally irrelevant nor inflate them into harms against non-existent victims.

Cognitive Representation Theory provides such a framework. It holds that the moral status of an action toward a robot is determined by the agent’s cognitive representation, that is, what the agent functionally treats the robot as and what their engagement is responsive to. The principle is applied via a counterfactual substitution test. We evaluate the action as if the representation were veridical. The resulting moral characterisation attaches to the actual action.

This principle is not an innovation but a formalisation of existing practice. We already evaluate agents based on cognitive representation in cases of mistaken object, attempt, and intention. CRT articulates what these practices presuppose, namely that character is intrinsic to the agent and constituted by psychological orientation rather than by the cooperation of external reality.

Applied to the cases that opened this essay, CRT yields determinate verdicts. The child-form sex robot expresses paedophilic desire. The non-consent simulator expresses desire for violation. The torture for amusement expresses cruelty. These are not consequences of harming anyone. They are constitutive of what the agents are.

The vice is real. Only the victim is absent.

One qualification remains, and it explains the asymmetry defended in Section VII: ethics is not solely agent-focused. Consequences matter, and harm to actual victims matters. CRT governs the agent-focused dimension, namely what the action reveals about character, but it does not exhaust ethical evaluation. An agent who tortures a conscious being while believing it to be a robot has caused real harm to a real victim. That harm is wrong regardless of representation. The agent may be less culpable than one who knew the victim's nature, but is not absolved.

The asymmetry reflects the structure of moral evaluation itself. Viciousness of character is sufficient for moral criticism. Blamelessness of character is not sufficient for moral permission, because actions have consequences beyond their revelations of character. CRT captures the agent-focused dimension, but it does not claim to capture the whole of ethics.