
Ensuring trustworthy and ethical behavior in intelligent logical agents

Costantini S.
2020-01-01

Abstract

Autonomous Intelligent Agents are employed in many important applications upon which the life and welfare of living beings and vital social functions may depend. Therefore, agents should be trustworthy. A priori certification techniques can be useful, but they are not sufficient for agents that evolve and thus modify their epistemic and belief state. In this paper we propose, refine, and extend techniques for run-time assurance, based upon introspective self-monitoring and checking. The aim is to build a 'toolkit' that allows an agent designer/developer to ensure trustworthy and ethical behavior.
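As a rough illustration of what run-time self-monitoring and checking of an evolving belief state can mean in practice, the sketch below shows an agent that re-checks a set of constraints whenever its beliefs change. All names (MonitoredAgent, update_beliefs, the example constraint) are illustrative assumptions for this sketch, not the constructs defined in the paper.

```python
# Minimal, hypothetical sketch of run-time self-checking for an evolving agent.
# The class and function names are illustrative assumptions, not the paper's framework.

from typing import Callable, List, Set

Belief = str
Constraint = Callable[[Set[Belief]], bool]

class MonitoredAgent:
    def __init__(self, constraints: List[Constraint]):
        self.beliefs: Set[Belief] = set()
        self.constraints = constraints
        self.violations: List[str] = []

    def update_beliefs(self, added: Set[Belief], removed: Set[Belief]) -> None:
        # The agent evolves: its epistemic/belief state changes at run time.
        self.beliefs |= added
        self.beliefs -= removed
        self.self_check()

    def self_check(self) -> None:
        # Introspective step: the agent inspects its own belief state and
        # records any violated constraint so corrective action can be taken.
        for c in self.constraints:
            if not c(self.beliefs):
                self.violations.append(f"violated: {c.__name__}")

# Example constraint: never hold a belief committing to an unsafe action.
def no_unsafe_action(beliefs: Set[Belief]) -> bool:
    return "do(unsafe_action)" not in beliefs

agent = MonitoredAgent([no_unsafe_action])
agent.update_beliefs({"do(unsafe_action)"}, set())
print(agent.violations)  # ['violated: no_unsafe_action']
```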
Files in this record:

CILC2020-Stefania-Trustworthy.pdf

Open access

Type: Pre-print document
License: Public domain
Size: 263.92 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11697/160528
Citations
  • PubMed Central: n/a
  • Scopus: 0
  • Web of Science: n/a