Systematic literature review on the application of explainable artificial intelligence in palliative care studies
BACKGROUND: As machine learning models become increasingly prevalent in palliative care, explainability has become a critical factor in their successful deployment in this sensitive field, where decisions can profoundly impact patient health and quality of life. To address these concerns, Explainable AI (XAI) aims to make complex AI models more understandable and trustworthy.
OBJECTIVE: This study aims to assess the current state of machine learning models in palliative care, specifically focusing on their compliance with the principles of XAI.
METHODS: A comprehensive literature search of four databases was conducted to identify articles on machine learning in palliative care published up to May 2024, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The Checklist for Assessment of Medical Artificial Intelligence was used to evaluate the quality of the studies.
RESULTS: Mortality and survival prediction were the primary focus areas in 15 (54%) of the 28 included studies. Regarding data explainability, 20 studies (71%) documented their data preprocessing methods; however, 45% of the studies did not address the handling of missing data, a notable concern. Across these studies, 74 machine learning algorithms were employed. Complex models, including Random Forest, Support Vector Machines, Gradient Boosting Machines, and Deep Neural Networks, predominated (64%) because of their high predictive power, achieving AUC values between 0.82 and 0.96. Post-hoc explanation techniques were applied in only 11 studies, drawing on seven different XAI techniques and focusing on global explanations to enhance understanding of overall model behavior.
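For illustration only, the sketch below shows what such a pipeline can look like: a Random Forest classifier evaluated by AUC, followed by a post-hoc global explanation. It uses synthetic data and permutation feature importance as one commonly used global XAI technique; it is not drawn from any included study.

```python
# Minimal, hypothetical sketch: complex model + AUC evaluation + post-hoc global explanation.
# All data, features, and parameters here are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular clinical dataset with a binary outcome (e.g., mortality).
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A "complex" model of the kind favored in the reviewed studies.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Discrimination performance reported as AUC, as in the reviewed studies.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.2f}")

# Post-hoc global explanation: permutation feature importance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0, scoring="roc_auc")
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```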
CONCLUSION: Given the critical role of AI-driven decisions in patient care, adopting XAI techniques is essential for fostering trust and usability. Although progress has been made, significant gaps persist. A main challenge remains the trade-off between model performance and interpretability, as highly accurate models often lack the transparency required to build trust in clinical settings. Additionally, studies using complex models frequently provide inadequate explanations for their outputs, lack consistent documentation, and make only limited use of XAI, reducing their interpretability for clinicians and decision-makers.
http://dx.doi.org/10.1016/j.ijmedinf.2025.105914
Published in: International Journal of Medical Informatics, vol. 200.