Explaining Graph Neural Network Predictions for Drug Repurposing

Abstract

Graph Neural Networks (GNNs) are powerful tools for graph-related tasks, excelling at processing graph-structured data while maintaining permutation invariance. However, the opacity of the learned node representations hinders interpretability. This paper introduces a framework that addresses this limitation by explaining GNN predictions: given any GNN prediction, it returns a concise subgraph as an explanation. Using Saliency Maps, a gradient-based attribution technique, we enhance interpretability by assigning importance scores to entities within the knowledge graph via backpropagation. Evaluated on the Drug Repurposing Knowledge Graph, the Graph Attention Network achieved a Hits@5 score of 0.451 and a Hits@10 score of 0.672, while GraphSAGE achieved notable results with the highest recall of 0.992. Our framework underscores both the efficacy and the interpretability of GNNs, which is crucial in complex scenarios such as drug repurposing. Illustrated through an Alzheimer's disease case study, our approach provides meaningful and comprehensible explanations for GNN predictions. This work contributes to advancing the transparency and utility of GNNs in real-world applications. © 2024 Copyright for this paper by its authors.
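The saliency idea described in the abstract — importance of an input as the magnitude of the prediction's gradient with respect to it — can be sketched on a toy scorer. This is a minimal illustration, not the paper's actual model: the linear `score` function, the feature values, and the weights are all assumptions, and the finite-difference gradient stands in for the backpropagation a real GNN framework would use.

```python
def score(features, weights):
    """Toy link-prediction score: dot product of entity features and weights."""
    return sum(f * w for f, w in zip(features, weights))

def saliency(features, weights, eps=1e-6):
    """Approximate |d score / d feature_i| by central finite differences.
    In a real GNN these gradients come from backpropagation."""
    importances = []
    for i in range(len(features)):
        plus = list(features); plus[i] += eps
        minus = list(features); minus[i] -= eps
        grad = (score(plus, weights) - score(minus, weights)) / (2 * eps)
        importances.append(abs(grad))
    return importances

# Hypothetical entity features and learned weights (illustrative only).
features = [0.2, 0.9, 0.1]
weights = [1.5, -0.4, 3.0]
print(saliency(features, weights))
```

For this linear scorer the saliency of each feature is simply the magnitude of its weight; in the paper's setting the same per-entity scores are ranked to select the concise explanatory subgraph.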

Keywords

Alzheimer's Disease, Drug Repurposing, Explainable AI (XAI), Graph Neural Networks (GNNs), Knowledge Graphs (KGs), Saliency Maps

Volume

3890

Start Page

46

End Page

55