Trust, Trustworthiness and the Moral Dimension in Human-AI Interactions

Donatella Donati
2025-01-01

Abstract

The growing use of Autonomous Agents (AAs) in both the private and public sectors raises crucial questions about trust. As AI systems take on increasingly complex tasks and decisions, their interactions with human agents (HAs) call into question the relevance and applicability of traditional philosophical concepts of trust and trustworthiness (sections 1 and 2). In this paper, I explore the nuances of trust in AAs, arguing against both the complete dismissal of such trust as misplaced (section 4) and the application of “genuine” trust frameworks to AAs (section 5). My aim is to lay the groundwork for understanding that the moral complexity of our interactions with AAs goes beyond the mere reliance we place on inanimate objects (section 6).

Use this identifier to cite or link to this document: https://hdl.handle.net/11697/254679