What is it about?
Our work focuses on improving early breast cancer detection by creating realistic “synthetic” mammogram images. In real clinics, radiologists compare a woman’s past breast images with her current ones to look for small changes that might indicate cancer. However, researchers often do not have enough of these paired images to train advanced AI systems. To address this problem, we developed a new computer model that can simulate how a tumor might appear in a current mammogram by learning from both a previous exam and the patient’s current healthy image. The model uses modern AI techniques to create highly realistic tumor appearances and blend them naturally into the breast tissue. These synthetic images look very similar to real cancer images and can help researchers train better AI tools—even when real longitudinal data is limited. Ultimately, this technology aims to support earlier and more accurate breast cancer detection by giving AI systems more of the data they need to learn effectively.
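To give a sense of how a dual-encoder generator with learnable blending might be wired together, here is a minimal PyTorch sketch. It is an illustration under assumptions, not the architecture from the paper: the module names (DualEncoderGenerator, tumor_head, mask_head), the layer sizes, and the simple per-pixel alpha compositing used for blending are placeholders chosen for clarity.

```python
# Minimal, assumption-heavy sketch of a dual-encoder generator with learnable blending.
# Layer sizes, module names, and the fusion strategy are illustrative, not the exact design.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Downsampling convolution block shared by both encoders.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up_block(in_ch, out_ch):
    # Upsampling block for the decoder.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DualEncoderGenerator(nn.Module):
    """Encodes the prior exam and the current healthy image separately, fuses the
    two feature maps, and decodes (i) a synthetic tumor appearance and (ii) a
    blending mask that places the tumor into the current image."""
    def __init__(self, base=32):
        super().__init__()
        # One encoder per time point (grayscale mammograms -> 1 input channel).
        self.enc_prior = nn.Sequential(conv_block(1, base), conv_block(base, base * 2))
        self.enc_current = nn.Sequential(conv_block(1, base), conv_block(base, base * 2))
        # Fuse the concatenated features from both encoders.
        self.fuse = nn.Conv2d(base * 4, base * 2, kernel_size=3, padding=1)
        # Shared decoder trunk.
        self.dec = nn.Sequential(up_block(base * 2, base), up_block(base, base))
        # Two heads: tumor appearance and a learnable blending mask (alpha).
        self.tumor_head = nn.Sequential(nn.Conv2d(base, 1, 3, padding=1), nn.Tanh())
        self.mask_head = nn.Sequential(nn.Conv2d(base, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, prior_img, current_img):
        f = torch.cat([self.enc_prior(prior_img), self.enc_current(current_img)], dim=1)
        h = self.dec(self.fuse(f))
        tumor = self.tumor_head(h)   # synthetic tumor appearance in [-1, 1]
        alpha = self.mask_head(h)    # per-pixel blending weights in [0, 1]
        # Learnable blending: composite the tumor into the current healthy image.
        blended = alpha * tumor + (1.0 - alpha) * current_img
        return blended, tumor, alpha

# Usage with dummy tensors (images normalized to [-1, 1]):
if __name__ == "__main__":
    gen = DualEncoderGenerator()
    prior = torch.randn(2, 1, 128, 128)
    current = torch.randn(2, 1, 128, 128)
    blended, tumor, alpha = gen(prior, current)
    print(blended.shape)  # torch.Size([2, 1, 128, 128])
```

The point the sketch tries to capture is that one encoder reads the prior exam, a second reads the current healthy image, and the decoder produces both a tumor appearance and a per-pixel mask, so the network itself learns where and how strongly to blend the lesion into the surrounding breast tissue.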
Why is it important?
This work is important because it addresses a major data limitation in breast cancer research. AI models usually need many paired prior–current mammograms to learn how tumors develop, but such data are extremely rare. Our method creates realistic synthetic cancer cases that mimic true progression, allowing researchers to train stronger and more reliable detection systems. By filling this critical data gap, the work can directly support earlier diagnosis and improve the accuracy of clinical decision-making tools.

The uniqueness of this study lies in generating cancer directly from a patient's prior exam and current healthy image, something no previous method has done. This not only produces highly realistic examples for training AI but also reflects the real clinical workflow, in which radiologists compare past exams. As a result, our approach enables the development of next-generation AI models that better understand subtle temporal changes, potentially leading to faster, more accurate identification of breast cancer and improved patient outcomes.
Perspectives
From my perspective, this work reflects an important step toward making AI systems more aligned with how radiologists actually think. I have always believed that temporal information (how a breast changes over time) is essential for early cancer detection, yet it is rarely incorporated into AI models because of data limitations. Developing a method that can realistically simulate tumor progression felt meaningful to me, not only as a researcher but also as someone motivated by improving patient care. I see this work as a foundation for future tools that can learn from richer, more clinically relevant data.

What stands out to me personally about this project is how it bridges a real gap between clinical practice and AI research. Radiologists rely heavily on comparing prior and current mammograms, but most AI models never see that type of information. Creating a framework that generates longitudinal cancer cases allowed me to explore this challenge creatively and technically. It was rewarding to see that the synthetic images we produced could actually help improve downstream detection models. For me, this work represents both a methodological innovation and a meaningful contribution toward more reliable breast cancer diagnostics.
Afsana Ahsan Jeny
University of Connecticut
Read the Original
This page is a summary of: Longitudinal Tumor Generation in Mammograms via Dual Encoder GAN and Learnable Blending, October 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3765612.3767232.
Contributors
The following have contributed to this page:
Afsana Ahsan Jeny, University of Connecticut