dc.description.abstract |
Stroke presents a formidable global health threat, carrying significant
risks and challenges. Timely intervention and improved outcomes hinge on the
integration of Explainable Artificial Intelligence (XAI) into medical decision-making. XAI, an evolving field, enhances the transparency of conventional
Artificial Intelligence (AI) models. This systematic review addresses key research
questions: How is XAI applied in the context of stroke diagnosis? To what extent can
XAI elucidate the outputs of machine learning models? Which systematic evaluation methodologies are employed, and which categories of explainable approaches
(Model Explanation, Outcome Explanation, Model Inspection) are prevalent? We
conducted this review following the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses (PRISMA) guidelines. Our search encompassed five
databases: Google Scholar, PubMed, IEEE Xplore, ScienceDirect, and Scopus, spanning studies published between January 1988 and June 2023. Various combinations
of search terms, including “stroke,” “explainable,” “interpretable,” “machine learning,” “artificial intelligence,” and “XAI,” were employed. This study identified 17
primary studies employing explainable machine learning techniques for stroke diagnosis. Among these studies, 94.1% incorporated XAI for model visualization,
and 47.06% employed model inspection. Notably, none of the studies employed
evaluation metrics such as D, R, F, or S to assess the performance of their
XAI systems, and none evaluated human confidence in utilizing XAI for
stroke diagnosis. Explainable Artificial Intelligence serves as a vital tool in enhancing trust among both patients and healthcare providers in the diagnostic process.
The effective implementation of systematic evaluation metrics is crucial for harnessing the potential of XAI in improving stroke diagnosis. |
en_US |