
Towards humanized ethical intelligent agents: The role of reflection and introspection

Costantini S.; Dyoub A.; Pitoni V.
2018-01-01

Abstract

Methods for implementing Automated Reasoning in a fashion that is at least reminiscent of human cognition and behavior must (also) refer to Intelligent Agents. In fact, agent-based systems nowadays implement many important autonomous applications in critical contexts, and sometimes the life and welfare of living beings may depend upon these applications. In order to interact properly with human beings and human environments, agents operating in critical contexts should be to some extent 'humanized': i.e., they should do what is expected of them, but, perhaps more importantly, they should not behave in improper or unethical ways. Ensuring ethical reliability can also help to improve the 'relationship' between humans and robots: despite the promise of immensely improving the quality of life, humans take an ambivalent stance toward autonomous systems, because we fear that they may abuse their power and make decisions not aligned with human values. To this end, we propose techniques for introspective self-monitoring and checking.
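The abstract does not detail the proposed techniques. As a rough, hypothetical illustration of what runtime introspective self-monitoring of an agent might look like in general, the sketch below wraps an agent's proposed actions in an ethical-constraint check and keeps a trace of its own decisions. All names (EthicalMonitor, the constraint predicates, the effects dictionary) are illustrative assumptions, not the authors' formalism.

```python
# Hypothetical sketch: an agent proposes actions, and a monitor layer checks
# each one against declared ethical constraints before it is executed,
# logging its own checks (a simple form of introspection).
# Names and structure are illustrative assumptions, not the paper's method.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Action:
    name: str
    effects: Dict[str, bool]  # predicted effects, e.g. {"harm_to_human": False}


@dataclass
class EthicalMonitor:
    # Constraints are predicates over an action's predicted effects;
    # an action is vetoed if any constraint returns True.
    constraints: List[Callable[[Action], bool]] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

    def check(self, action: Action) -> bool:
        for constraint in self.constraints:
            if constraint(action):
                self.log.append(f"vetoed {action.name}: violates {constraint.__name__}")
                return False
        self.log.append(f"approved {action.name}")
        return True


def causes_harm(action: Action) -> bool:
    return action.effects.get("harm_to_human", False)


monitor = EthicalMonitor(constraints=[causes_harm])
proposed = [
    Action("administer_medication", {"harm_to_human": False}),
    Action("ignore_distress_call", {"harm_to_human": True}),
]

executed = [a.name for a in proposed if monitor.check(a)]
print(executed)     # ['administer_medication']
print(monitor.log)  # introspective trace of the checks performed
```

In a fuller treatment, the constraints would be stated declaratively (e.g., in a logic-programming formalism) rather than as ad hoc Python predicates, and the log would feed back into the agent's own reasoning; this sketch only conveys the monitor-and-trace idea.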


Use this identifier to cite or link to this document: https://hdl.handle.net/11697/151044
Citations
  • Scopus: 1