What is it about?

Current AI in medicine often acts like a "black box": it may give the right answer, but doctors cannot tell why, or whether it looked at the right things. Our research builds a more trustworthy AI that reads chest X-ray images and classifies them while providing logical and visual reasoning for each diagnosis.


Why is it important?

This research bridges the gap between computer science and bedside medicine. By making the AI transparent and constraining it to follow anatomical rules, we've created a system that doctors are more likely to trust and use, ultimately making AI-assisted diagnosis safer and more reliable for patients.

Perspectives

The proposed system employs a dual-path model: an enhanced EfficientNetV2 backbone extracts hierarchical local features, while a refined Vision Transformer captures global contextual dependencies across the thoracic cavity. These representations are fused and then disciplined through auxiliary segmentation supervision using CheXmask, which anchors the learned features to lung and cardiac anatomy and reduces reliance on spurious artifacts. This anatomical grounding is fundamental to the interpretability pipeline: it confines Gradient-weighted Class Activation Mapping (Grad-CAM) visual explanations to clinically valid regions. Finally, a novel neuro-symbolic reasoning layer is introduced. Using a fuzzy logic engine and a radiological ontology, this module translates anatomically aligned neural activations into structured, human-readable diagnostic statements that explicitly articulate the model's clinical rationale.
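To give a flavor of the neuro-symbolic step, the miniature sketch below grades an anatomically confined activation score with fuzzy membership functions and maps the strongest grade to a readable statement. The region names, thresholds, and rule wording are illustrative assumptions for this summary, not the paper's actual ontology or rule base.

```python
# Illustrative sketch only: region names, thresholds, and rule text
# are hypothetical, not the paper's radiological ontology.

def fuzzy_grade(score: float) -> dict:
    """Simple triangular-style memberships for 'low', 'moderate', 'high'."""
    low = max(0.0, min(1.0, (0.4 - score) / 0.4))
    high = max(0.0, min(1.0, (score - 0.6) / 0.4))
    moderate = max(0.0, 1.0 - low - high)
    return {"low": low, "moderate": moderate, "high": high}

def explain(region_scores: dict) -> list:
    """Turn mask-confined activation scores into diagnostic statements."""
    statements = []
    for region, score in region_scores.items():
        grades = fuzzy_grade(score)
        term = max(grades, key=grades.get)  # defuzzify: strongest term wins
        if term != "low":  # only report clinically notable activations
            statements.append(
                f"{term.capitalize()} activation in the {region} "
                f"(membership {grades[term]:.2f}) supports the finding."
            )
    return statements

# Example: scores assumed already restricted to lung/cardiac masks
print(explain({"left lower lobe": 0.85, "cardiac silhouette": 0.15}))
```

In the full system, the scores would come from Grad-CAM maps masked by the CheXmask segmentation, so each statement is tied to a real anatomical region rather than a free-floating heatmap.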

Professor Gibrael Abo Samra
King Abdulaziz University

Read the Original

This page is a summary of: Anatomy-Guided Hybrid CNN–ViT Model with Neuro-Symbolic Reasoning for Early Diagnosis of Thoracic Diseases Multilabel, Diagnostics, January 2026, MDPI AG,
DOI: 10.3390/diagnostics16010159.
You can read the full text:

