
Ensemble approaches for Graph Counterfactual Explanations

Prado-Romero M. A.; Stilo G.; Celi A.; Valdes-Perez D. A.
2022-01-01

Abstract

In recent years, Graph Neural Networks have achieved outstanding performance in tasks such as community detection, molecule classification, and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the model's decisions is essential. Explainable AI, or Explainable Machine Learning, is artificial intelligence whose decisions or predictions humans can understand. A special case is counterfactual examples, which suggest the changes the system would need in order to alter its decision. Historically, ensemble learning and explainability have been exploited jointly to explain the decisions of ensemble models. In contrast, in this work we focus on ensemble mechanisms for the explainers themselves, with the aim of improving the quality of the explanations. We thus explore which ensemble mechanisms can be adopted in several explainability scenarios. Furthermore, we introduce and discuss a new explainability problem in which a single coherent counterfactual explanation must be provided for a set of input instances and their explanations.
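The abstract does not spell out the ensemble mechanisms it studies. Purely as an illustration, one plausible selection-style ensemble takes the counterfactual candidates produced by several base explainers, keeps those that actually flip the black-box prediction, and returns the one closest to the input. The sketch below assumes a toy encoding of undirected graphs as frozensets of edges and a symmetric-difference edit cost; the function names (`ensemble_select`, `edit_distance`) and the encoding are hypothetical, not the authors' method.

```python
# Hypothetical sketch of a selection-style ensemble of counterfactual
# explainers. Graphs are encoded as frozensets of undirected edges,
# where each edge is a frozenset of its two endpoints.

def edit_distance(g, h):
    """Count edge insertions/deletions separating two graphs.
    A simple counterfactual-cost proxy (an assumption here)."""
    return len(g ^ h)

def ensemble_select(instance, candidates, oracle):
    """Among candidate counterfactuals proposed by several base
    explainers, return a valid one (the oracle's prediction flips)
    that is closest to the original instance, or None if no
    candidate is valid."""
    original = oracle(instance)
    valid = [c for c in candidates if oracle(c) != original]
    if not valid:
        return None
    return min(valid, key=lambda c: edit_distance(instance, c))
```

For example, with a toy oracle that classifies a graph by the parity of its edge count, the ensemble discards candidates that keep the original class and picks the valid candidate with the fewest edge edits.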

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11697/200283
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: ND