Abstract
The question of what moral status to ascribe to artificially intelligent entities, such as social robots, is at the forefront of academic debate. The related question of how social AI affects the moral status we can ascribe to ourselves as human beings, by contrast, has received far less attention. Philosophers who discuss this issue propose a relational turn: human moral status should not be understood as based on uniquely or typically human traits or capacities, but as deriving from interpersonal interactions and practices through which people come to respect and morally value others. The aim of this article is to set out how developments in the field of social AI pose a problem not only for traits-based accounts of (human) moral status, but also for the relational approach. Our ways of morally relating to others are often mistaken, and social AI heightens this risk, as it is purposefully designed to evoke moral responses in humans by displaying traits and mirroring forms of interaction to which we would ascribe moral relevance in relations with fellow humans or (non-human) animals. If we cannot take the moral appearance of interactions with social AI at face value, we need further normative grounds to distinguish relations in which moral status is properly ascribed from those in which it is not.
| Original language | English |
|---|---|
| Journal | AI and Society |
| Status | Published - 22 Oct 2025 |
Themes from UvH's research agenda
- Critical humanism in the 21st century
Fingerprint
Dive into the research topics of 'Dangerous liaisons: Social AI and the problem with the relational turn to moral status'. Together they form a unique fingerprint.