What is it about?
When scientists study cells or tissues under a microscope, they often need to distinguish between different areas—like healthy versus healing tissue, or cells versus empty space. Traditionally, this requires adding special dyes or stains to make features visible, which can be time-consuming, expensive, or even harmful to living samples. We developed a simple, free tool that analyzes microscope images by looking at their natural texture and patterns—without any staining needed. Think of it like recognizing a forest from satellite imagery by the density of tree edges, rather than by color. Our method counts how many fine structural details appear in each part of an image, then uses that information to separate regions of interest. We tested this approach on two common tasks: measuring how quickly cells close a wound in a lab dish, and distinguishing healed from original tissue in microscope slides. In both cases, our tool matched expert manual analysis with over 95% accuracy. Because it requires no training data, runs on ordinary computers, and lets users adjust settings with instant visual feedback, it offers a practical, accessible alternative to complex artificial intelligence methods — helping researchers get reliable results faster and with fewer experimental constraints.
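For readers curious what "counting fine structural details" looks like in practice, here is a minimal Python/OpenCV sketch of the general idea: detect fine edges, then average them over a sliding window to get a local edge-density map. This is an illustration under assumed settings (the `local_edge_density` name, the 31-pixel window, and the Canny thresholds are placeholders), not the exact implementation described in the paper.

```python
import cv2
import numpy as np

def local_edge_density(gray_image, window=31, canny_low=50, canny_high=150):
    """Return a map of the fraction of edge pixels inside each local window."""
    edges = cv2.Canny(gray_image, canny_low, canny_high)   # binary edge map, values 0 or 255
    edges = (edges > 0).astype(np.float32)                  # convert to 0/1
    # Averaging the 0/1 edge map over a window gives the local edge density
    return cv2.blur(edges, (window, window))

# Illustrative usage on a brightfield or phase-contrast image:
# img = cv2.imread("scratch_assay.png", cv2.IMREAD_GRAYSCALE)
# density = local_edge_density(img)
# cell_mask = density > 0.05   # textured (cell-covered) regions vs. smooth background
```

Regions rich in fine cellular texture light up in the density map, while smooth, cell-free background stays close to zero.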
Featured Image
Photo by Bioscience Image Library by Fayette Reynolds on Unsplash
Why is it important?
What makes our work unique is that it delivers deep learning–competitive segmentation accuracy (95–99%) without requiring training data, GPU resources, or specialized staining protocols. At a time when biomedical imaging increasingly relies on complex, black-box AI models, we offer a transparent, interpretable alternative: local edge density as a surrogate image channel that quantifies tissue "patchiness" directly from standard brightfield or phase-contrast microscopy. Two significant findings stand out: (a) edge density correlates with expert manual assessments at ρ > 0.97 across diverse imaging conditions, and (b) this simple metric can distinguish native from regenerated tissue in histological sections with nearly three-fold contrast—even when using routine hematoxylin–eosin staining instead of specialized collagen-targeted protocols. By providing an open-source, GUI-based tool (BCAnalyzer) that enables real-time, interactive adjustment by domain experts without coding expertise, our approach lowers barriers to rigorous image quantification, reduces experimental costs by minimizing reliance on fluorescent markers, and supports reproducible, explainable analysis in both resource-limited and high-throughput research settings.
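As a rough sketch of how such an edge-density channel can drive a segmentation and a wound-closure readout, the snippet below simply thresholds the density map from the previous example. The `wound_area_fraction` helper, the 0.05 threshold, and the reuse of `local_edge_density` are illustrative assumptions, not the published tool's code or parameters.

```python
def wound_area_fraction(gray_image, density_threshold=0.05, window=31):
    """Estimate the fraction of the field of view that is still cell-free."""
    density = local_edge_density(gray_image, window=window)  # helper from the sketch above
    cell_mask = density > density_threshold                  # "patchy", cell-covered regions
    wound_mask = ~cell_mask                                   # smooth, cell-free wound area
    return float(wound_mask.mean())

# Tracking wound closure across a time series of frames (paths are hypothetical):
# fractions = [wound_area_fraction(cv2.imread(p, cv2.IMREAD_GRAYSCALE)) for p in frame_paths]
```

In this toy version the two natural knobs are the window size and the density threshold, loosely mirroring the paper's emphasis on keeping the number of user-tunable parameters small.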
Perspectives
Writing this paper was personally rewarding because it represents a deliberate step back from the prevailing trend toward ever-more-complex deep learning solutions. Early in this project, I found myself repeatedly hearing from biologists and clinicians that they needed usable tools—not black-box models that required months of annotation work or computational resources they didn't have. That feedback shaped our core philosophy: simplicity, transparency, and accessibility first.

What excites me most is that a concept as intuitive as "counting edges in a small window" turned out to be so powerful. It feels almost obvious in hindsight, yet it reliably matches expert manual assessments and competes with transformer-based architectures on real-world tasks. This reaffirms my belief that elegant, interpretable methods still have tremendous value—especially in biomedical contexts where domain experts need to understand, trust, and adjust the analysis in real time.

I also hope this work encourages more collaboration between computer vision researchers and experimental biologists. By releasing BCAnalyzer as a free, open-source tool with a graphical interface, we aimed to lower the barrier for adoption. Already, I've received messages from labs using it for projects we hadn't even imagined—from biofilm quantification to plant tissue analysis. That organic spread is deeply gratifying.

Finally, on a personal note: developing this algorithm taught me to appreciate the art of constraint. Limiting ourselves to two tunable parameters forced creative problem-solving and, paradoxically, led to a more robust and generalizable solution. I hope readers—whether seasoned image analysts or first-time users—find the tool as useful and empowering as we intended, and that it sparks new ideas about how classical computer vision can complement, rather than compete with, modern AI in biomedical discovery.
Dr. Aleksandr Sinitca
Saint Petersburg Electrotechnical University "LETI"
Read the Original
This page is a summary of: Segmentation of patchy areas in biomedical images based on local edge density estimation, Biomedical Signal Processing and Control, January 2023, Elsevier. DOI: 10.1016/j.bspc.2022.104189.