What is it about?

Glioblastoma is an aggressive brain tumor where rapid diagnosis is critical for survival. Currently, specialists must manually outline the tumor on MRI scans, a tedious process taking about 60 minutes per patient that delays treatment. We developed an automated artificial intelligence system to perform this segmentation instantly. Instead of relying on a single AI model, we combined three different advanced models into a team. We used a special fuzzy logic-based voting system to weigh the confidence of each model, ensuring the final outline is more accurate than any single model could achieve alone. This tool aims to reduce diagnosis time, making critical treatment planning faster and more accessible, especially in regions with fewer medical experts.


Why is it important?

This work is the first to extend a fuzzy rank-based ensemble method—previously applied only to 2D image classification—to the challenging task of 3D brain tumor segmentation in MRI scans. Unlike conventional ensembles that use fixed averaging or voting, our approach dynamically weighs predictions from three state-of-the-art models (SegResNet, UNETR, SwinUNETR) using two nonlinear functions that account for both prediction confidence and deviation from target classes. This allows the system to adaptively correct individual model errors without assigning static weights.

The timing is critical: glioblastoma survival rates have remained unchanged for decades, manual tumor segmentation still requires ~60 minutes per patient, and there is a growing global shortage of qualified neuroradiologists, particularly in developing regions. By delivering a resource-conscious solution (trained within 10 hours on standard GPUs) with open-source pipelines, our method offers a reproducible, accessible tool that can accelerate diagnosis and treatment planning. The statistically significant improvement in Dice score demonstrates that fuzzy ensemble fusion can meaningfully enhance segmentation reliability—potentially enabling more precise surgical planning and broader deployment of AI-assisted diagnostics in clinical settings where computational resources are limited.
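To make the fusion idea concrete, here is a minimal per-voxel sketch of fuzzy rank-based ensembling. It is an illustration, not the paper's exact implementation: the two nonlinear functions shown (an exponential and a tanh transform of the squared deviation from full confidence) and the `fuzzy_rank_fusion` helper are assumptions chosen to mirror the general scheme of confidence- and deviation-aware rank fusion.

```python
import numpy as np

def fuzzy_rank_fusion(probs: np.ndarray) -> int:
    """Fuse per-class confidences from several models for one voxel.

    probs: array of shape (n_models, n_classes) holding softmax outputs.
    Returns the index of the winning class (lowest fused penalty).
    Note: these particular functions are illustrative, not the paper's.
    """
    dev = (probs - 1.0) ** 2                 # squared deviation from full confidence
    r1 = 1.0 - np.exp(-dev / 2.0)            # exponential penalty: 0 when p == 1
    r2 = 1.0 - np.tanh(dev / 2.0)            # tanh factor: 1 when p == 1
    rank = r1 * r2                           # fused rank score: small when confident
    fused = rank.sum(axis=0)                 # aggregate over the ensemble
    complement = (1.0 - probs).mean(axis=0)  # mean deviation from the target class
    return int(np.argmin(fused * complement))

# Three models agree that class 0 is most likely:
votes = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.7, 0.3]])
print(fuzzy_rank_fusion(votes))  # → 0
```

Because the fused score depends nonlinearly on each model's confidence, a single highly confident model can outvote two weakly confident ones, which is exactly the adaptive behavior that fixed-weight averaging lacks.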

Perspectives

Working on this paper was personally meaningful because it sits at the intersection of two challenges I care deeply about: advancing ensemble methods in deep learning, and making AI tools genuinely accessible for clinical settings with limited resources. Early in the project, we asked ourselves: "Can we build a high-performing segmentation system that doesn't require massive computational budgets or proprietary datasets?" That constraint—often seen as a limitation—became our guiding principle.

What surprised me most was how the fuzzy rank-based approach, which we initially explored for 2D classification tasks, translated so effectively to the far more complex 3D segmentation problem. Watching the ensemble correct subtle boundary errors that individual models missed—especially in the enhancing tumor region, where precision matters most for surgical planning—was a rewarding validation of the method's adaptability. Collaborating across institutions (LETI, Jadavpur University) brought diverse perspectives, from algorithmic design to clinical relevance, that strengthened the work.

I hope this publication encourages other researchers to view resource constraints not as barriers, but as catalysts for innovation. If this method helps even one clinical team accelerate diagnosis or expand access to AI-assisted MRI analysis in an underserved region, then the effort was worth it. Finally, I hope readers find the open-source pipelines useful—not just as a benchmark, but as a starting point for their own adaptations. Medical AI advances fastest when we build together.

Dr. Aleksandr Sinitca
Saint Petersburg Electrotechnical University "LETI"

Read the Original

This page is a summary of: A fuzzy rank-based ensemble of CNN models for MRI segmentation, Biomedical Signal Processing and Control, April 2025, Elsevier.
DOI: 10.1016/j.bspc.2024.107342.
