What is it about?

Applying deep learning (DL) and explainable artificial intelligence (XAI), and advancing towards a human-computer interface (HCI) model, can be a leap forward in medical research. This research proposes a robust, explainable HCI model using SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and DL algorithms. Combining these algorithms (logistic regression, 80.87% accuracy; support vector machine, 85.8%; k-nearest neighbour, 87.24%; multilayer perceptron, 91.94%; decision tree, 100%) with explainability can open untapped avenues of research in the medical sciences and shape the future of HCI models. The proposed model achieves high prediction accuracy, supports more efficient computer-assisted decision making, and is highly relevant to medical and clinical research.
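
As a rough illustration of such a pipeline (not the authors' code), the sketch below trains one of the listed classifiers and attaches SHAP and LIME explanations to its predictions, using the scikit-learn, shap, and lime packages. The dataset, feature names, and class names are synthetic placeholders standing in for the study's Mini-Mental State data.

```python
# Minimal sketch: classifier + SHAP (per-feature attributions) + LIME
# (single-prediction explanation). All data below is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # placeholder features
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # placeholder binary label
feature_names = ["mmse", "age", "educ", "ses"]  # hypothetical names

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# SHAP: attributions showing how each feature drives the tree's predictions
shap_values = shap.TreeExplainer(clf).shap_values(X_test)

# LIME: local surrogate explanation for a single test instance
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["non-demented", "demented"], discretize_continuous=True)
explanation = lime_explainer.explain_instance(
    X_test[0], clf.predict_proba, num_features=4)
print(explanation.as_list())
```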

Why is it important?

Directing Alzheimer’s research solely towards early prediction and accuracy cannot be considered a feasible approach to tackling what is today a ubiquitous degenerative disease. Applying deep learning (DL) and explainable artificial intelligence (XAI), and advancing towards a human-computer interface (HCI) model, can be a leap forward in medical research.

Perspectives

The explainability of algorithm-driven predictions is essential in today's computer vision era. The proposed model achieves high prediction accuracy, supports more efficient computer-assisted decision making, and is highly relevant to medical and clinical research.

Dr Loveleen Gaur
Amity International Business School

Read the Original

This page is a summary of: Explanation-driven HCI Model to Examine the Mini-Mental State for Alzheimer’s Disease, ACM Transactions on Multimedia Computing, Communications, and Applications, April 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3527174.
You can read the full text via the DOI above.
