What is it about?

Using data from past patients, neural networks can be trained to classify cancers; once trained, they can be used to help diagnose new patients. However, most neural networks cannot explain how they arrive at their decisions. This work attempts to give an intuitive explanation of a neural network's decision making.
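As a rough illustration of the kind of pipeline described above (this is not the paper's transparent classifier, just a generic sketch using a public breast-cancer dataset and permutation importance as one off-the-shelf way to probe a trained network's decisions):

```python
# Illustrative sketch only -- NOT the method from the paper.
# Train a standard neural network on a public breast-cancer dataset,
# then probe its decisions with permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

# Scale features so the network trains stably.
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

acc = clf.score(scaler.transform(X_test), y_test)

# Permutation importance: how much does shuffling each feature hurt accuracy?
# Features whose shuffling hurts most are the ones the network relies on.
result = permutation_importance(
    clf, scaler.transform(X_test), y_test, n_repeats=5, random_state=0)
top = data.feature_names[result.importances_mean.argmax()]
print(f"test accuracy = {acc:.2f}; most influential feature: {top}")
```

Post-hoc probes like this give only a coarse picture; the paper's contribution is a classifier whose internal structure is itself interpretable.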

Why is it important?

The ability of neural networks to explain their decisions will be important for their reliability and usability in real-world clinical settings.

Perspectives

This work improves the reliability and usability of neural networks as diagnostic tools.

Pitoyo Hartono
Chukyo University

Read the Original

This page is a summary of: A transparent cancer classifier, Health Informatics Journal, December 2018, SAGE Publications,
DOI: 10.1177/1460458218817800.
