What is it about?

We investigate deep learning for video compressive sensing within the scope of snapshot compressive imaging (SCI). In video SCI, multiple high-speed frames are each modulated by a different coding pattern, and a low-speed detector captures the sum of these modulated frames as a single snapshot measurement. We build a video SCI system using a digital micromirror device and develop both an end-to-end convolutional neural network (E2E-CNN) and a Plug-and-Play (PnP) framework with deep denoising priors to solve the inverse problem, i.e., recovering the high-speed frames from the snapshot.
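To make the measurement process concrete, here is a minimal sketch of the SCI forward model in NumPy. The function name sci_forward, the toy frame count, and the random binary masks are illustrative assumptions rather than code from the paper: each high-speed frame is multiplied pixel-wise by its coding pattern, and the detector integrates the modulated frames into one snapshot.

    import numpy as np

    def sci_forward(frames, masks):
        # frames, masks: (B, H, W) arrays; each frame is modulated by its own
        # coding pattern, and the detector integrates them into one 2-D snapshot.
        return np.sum(masks * frames, axis=0)

    # Toy example (sizes are illustrative): 8 high-speed frames compressed
    # into a single measurement.
    B, H, W = 8, 64, 64
    frames = np.random.rand(B, H, W)                        # unknown high-speed video
    masks = (np.random.rand(B, H, W) > 0.5).astype(float)   # binary coding patterns
    snapshot = sci_forward(frames, masks)                   # what the low-speed detector records
    print(snapshot.shape)                                   # (64, 64)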


Why is it important?

We compare both approaches with the iterative baseline algorithm GAP-TV and the state-of-the-art DeSCI on real data. For a given hardware setup, a well-trained E2E-CNN can provide high-quality reconstruction at video rate. The PnP deep denoising method can generate decent results without task-specific pre-training and is faster than conventional iterative algorithms. Considering speed, accuracy, and flexibility, the PnP deep denoising method may serve as a baseline for video SCI reconstruction.
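As a rough illustration of how the PnP approach works, below is a minimal sketch of an alternating-projection-style PnP loop for video SCI in NumPy, written in the spirit of the paper's PnP framework rather than copied from it. The function name pnp_gap_sci, the iteration count, and the smoothing "denoiser" in the usage example are assumptions for illustration; in practice a pretrained deep denoiser is plugged in as the prior, with no task-specific retraining.

    import numpy as np

    def pnp_gap_sci(y, masks, denoiser, iters=50):
        # y: (H, W) snapshot; masks: (B, H, W) coding patterns;
        # denoiser: any callable mapping a (B, H, W) video estimate to a
        # denoised one (a deep denoiser plays this role in the PnP framework).
        Phi_sum = np.sum(masks ** 2, axis=0)   # diagonal of Phi Phi^T, pixel-wise
        Phi_sum[Phi_sum == 0] = 1.0            # guard against unsensed pixels
        x = masks * y                          # simple initialization: Phi^T y
        for _ in range(iters):
            # project the estimate toward the measurement constraint y = Phi x
            residual = y - np.sum(masks * x, axis=0)
            v = x + masks * (residual / Phi_sum)
            # plug in the denoising prior
            x = denoiser(v)
        return x

    # Example usage, reusing snapshot and masks from the forward-model sketch above
    # and a trivial Gaussian-smoothing "denoiser" only to show the interface:
    from scipy.ndimage import gaussian_filter
    video_hat = pnp_gap_sci(snapshot, masks, lambda v: gaussian_filter(v, sigma=0.5))

Because the denoiser is an interchangeable module, the same loop can be reused when the coding patterns or scene change, which is the flexibility advantage noted above.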

Perspectives

Existing surveillance cameras consume large amounts of memory and bandwidth. If video SCI cameras were deployed for traffic surveillance, our method would allow quick and easy recovery of the data they capture.

Xin Yuan
Bell Labs

Read the Original

This page is a summary of: Deep learning for video compressive sensing, APL Photonics, March 2020, American Institute of Physics,
DOI: 10.1063/1.5140721.
