Dangerous liaisons: Social AI and the problem with the relational turn to moral status

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

The question of what moral status to ascribe to artificially intelligent entities, such as social robots, is at the forefront of academic debate. The related question of how social AI affects the moral status we can ascribe to ourselves as human beings has, by contrast, received far less attention. Philosophers who discuss this issue propose a relational turn: human moral status should not be understood as based on uniquely or typically human traits or capacities, but as deriving from interpersonal interactions and practices through which people come to respect and morally value others. The aim of this article is to set out how developments in the field of social AI pose a problem not only for traits-based accounts of (human) moral status, but also for the relational approach. Our ways of morally relating to others are often mistaken, and social AI heightens this risk, as it is purposefully designed to evoke moral responses in humans by displaying traits and mirroring forms of interaction to which we would ascribe moral relevance in relations with fellow humans or (non-human) animals. If we cannot take the moral appearance of interactions with social AI at face value, we need further normative grounds to distinguish relations in which moral status is properly ascribed from those in which it is not.
Original language: English
Journal: AI and Society
DOIs
Publication status: Published - 22 Oct 2025

Keywords

  • Social AI
  • Human moral status
  • Ethics
  • Moral consideration
  • Relational turn
  • Vulnerability

Themes from the UHS research agenda

  • Critical humanism in the 21st century

