What is it about?

When part of a photo is blurred because something moved during the shot, most deblurring algorithms still process the entire image, wasting computation and sometimes degrading regions that were already sharp. Our method first learns which pixels are actually blurred, then processes only those pixels. It also estimates how each pixel moved during the exposure and uses that “motion trail” to remove the blur more accurately. The result: clearer images, less computing power, and longer battery life on your phone or camera.
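The core idea, selecting only the blurred pixels and spending compute there, can be illustrated with a minimal sketch. This is not the paper's actual network; the mask, the `per_pixel_fix` function, and the toy image are all illustrative assumptions, showing only how a predicted blur mask lets the restoration step scale with the blurred area instead of the full image:

```python
import numpy as np

def prune_and_deblur(image, blur_mask, per_pixel_fix):
    """Sketch of mask-based pixel pruning: a predicted binary mask selects
    the locally blurred pixels, and the (hypothetical) per_pixel_fix is
    evaluated only at those locations. Sharp pixels bypass it entirely."""
    out = image.copy()
    blurred = blur_mask.astype(bool)
    # Gather only the blurred pixels: compute now scales with the blur
    # area, not with the full image size.
    out[blurred] = per_pixel_fix(image[blurred])
    return out

# Toy usage: a 4x4 "image" where only two pixels are marked as blurred.
img = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = mask[3, 0] = True
restored = prune_and_deblur(img, mask, lambda px: px + 100.0)  # dummy "fix"
```

In the real method the mask comes from a learned blur-detection branch and the restoration uses lightweight convolutions guided by each pixel's motion trail; the sketch only captures the pruning structure.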


Why is it important?

We show that a photo can be deblurred only where pixels actually moved during the exposure, letting phones skip the rest. This means local-motion blur can be removed in real time without burst photos or extra hardware—something current patch-wise methods cannot do. The key insight is that a single frame still carries the motion trail, and we teach the network to read and use it on the fly.

Perspectives

As someone who constantly takes blurry phone photos of a hyperactive dog, this project felt personal. I wanted a deblurring trick that wouldn’t drain my battery or demand a DSLR. Marrying “where is the blur?” with “how did each pixel move?” turned out to be the sweet spot—simple masks plus lightweight convolutions, no gigantic transformer. My hope now is to see it ship inside next-gen smartphones so everyone’s casual snapshots come out crisp—no retake required.

Wei Shang
Harbin Institute of Technology

Read the Original

This page is a summary of: Motion-Aware Adaptive Pixel Pruning for Efficient Local Motion Deblurring, October 2025, ACM (Association for Computing Machinery),
DOI: 10.1145/3746027.3755062.
