Epistemic logics for modeling group dynamics of cooperative agents, and aspects of Theory of Mind

Costantini S.
2021-01-01

Abstract

Logic has proved useful for modeling various aspects of the reasoning process of agents and Multi-Agent Systems (MAS). In this paper, we report on a line of work carried out in cooperation with Andrea Formisano (former Eugenio's Ph.D. student) and Valentina Pitoni, exploring some social aspects of such systems. The aim is to formally model (aspects of) the group dynamics of cooperative agents. We have proposed a particular logical framework (the Logic of "Inferable", L-DINF), in which a group of cooperative agents can jointly perform actions: that is, at least one agent of the group can perform the action, either with the approval of the group or on behalf of the group. We are able to take into account the cost of actions and the preferences that each agent may have concerning the execution of each action. Our focus is on: (i) explainability, i.e., the syntax of our logic is devised specifically so that a proof can be transposed into a natural-language explanation, in the perspective of trustworthy Artificial Intelligence (AI); (ii) the capability to construct and execute joint plans within a group of agents; (iii) the formalization of aspects of the Theory of Mind, an important social-cognitive skill that involves the ability to attribute mental states (including emotions, desires, beliefs, and knowledge), both one's own and those of others, and to reason about the practical consequences of such mental states; this capability is very relevant when agents have to interact with humans, and in particular in robotic applications; (iv) the connection between theory and practice, so as to make our logic actually usable by system designers. In this paper, we summarize our past work and propose some discussion, possible extensions, and considerations.
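
To give a concrete, if highly simplified, intuition of the kind of group behaviour described above, the following sketch shows one possible way an action could be delegated to a group member based on cost and preference. It is not taken from the paper: the names Agent and choose_executor, and the selection rule itself, are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Agent:
    """A member of a cooperative group (hypothetical illustration)."""
    name: str
    budget: float                                               # resources the agent can spend on actions
    preferences: Dict[str, int] = field(default_factory=dict)   # action name -> degree of preference

def choose_executor(group: List[Agent], action: str, cost: float) -> Optional[Agent]:
    """Pick which agent performs `action` on behalf of the group.

    Toy rule: among the agents that can afford the action's cost,
    take the one with the highest declared preference for it.
    Returns None if no agent in the group can afford the action.
    """
    able = [a for a in group if a.budget >= cost]
    if not able:
        return None
    return max(able, key=lambda a: a.preferences.get(action, 0))

# Example: a small group deciding who opens a door.
group = [
    Agent("ag1", budget=3.0, preferences={"open_door": 2}),
    Agent("ag2", budget=10.0, preferences={"open_door": 5}),
]
executor = choose_executor(group, "open_door", cost=4.0)
if executor is not None:
    executor.budget -= 4.0
    print(f"{executor.name} performs 'open_door' on behalf of the group")
else:
    print("No agent in the group can afford 'open_door'")

In the actual L-DINF framework the choice is governed by the logic's inference rules rather than by such an ad-hoc procedure; the sketch only illustrates the interplay of costs and preferences mentioned in the abstract.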
Use this identifier to cite or link to this document: https://hdl.handle.net/11697/200389