What is it about?

This work introduces DeepHQ, a new deep-learning-based image compression method that allows a single compressed file to be decoded at multiple quality levels. Instead of sending separate compressed images for different qualities, DeepHQ creates one bitstream that can be decoded gradually, from a quick preview up to full detail.

Traditional learned image compression models are typically optimized for a single target quality. Progressive image coding methods exist, but current approaches are inefficient: they rely on fixed, hand-crafted quantization rules and transmit every piece of compressed information at every step, even when some of it is unnecessary.

DeepHQ addresses these problems by learning how much information to keep and transmit at each stage of decoding. It learns the quantization intervals (the compression precision) for each progressive layer instead of using fixed rules, and it decides which parts of the compressed representation actually matter at each quality level, skipping unnecessary details in the earlier stages. As a result, DeepHQ:

* saves more bits (better compression efficiency),
* requires less computation during decoding (faster), and
* uses a much smaller model.

DeepHQ achieves better image quality with fewer bits than previous state-of-the-art progressive compression models, while significantly reducing model size and decoding time. This makes it practical for real-world applications such as cloud image delivery, remote sensing, media streaming, and edge devices with limited computing power. In short, DeepHQ enables one compressed file that works for everyone (fast preview first, full detail later) while reducing both file size and processing cost.
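To make the layered idea concrete, here is a minimal, self-contained Python sketch; it is not the authors' code. The step sizes, channel masks, tensor shapes, and function names are illustrative assumptions: in DeepHQ the quantization intervals and the selection of latent elements are learned, whereas here they are hard-coded so you can see how a single layered bitstream yields progressively better reconstructions while early layers skip less important elements.

```python
# Illustrative sketch (not DeepHQ itself) of two ideas from the summary above:
# (1) per-layer quantization step sizes, which the method would learn, and
# (2) per-layer importance masks that skip latent elements at low quality levels.
# All names, shapes, and values here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the encoder's latent representation of one image:
# 8 channels of a 4x4 feature map.
latent = rng.normal(0.0, 3.0, size=(8, 4, 4))

# Hypothetical per-layer quantization step sizes. Each later layer uses a finer
# step, so it refines the reconstruction produced by the earlier layers.
step_sizes = [2.0, 1.0, 0.5]

# Hypothetical per-layer importance masks over channels: early layers transmit
# only the channels deemed most important, later layers add the rest.
channel_masks = [
    np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool),  # level 0: 3 channels
    np.array([1, 1, 1, 1, 1, 1, 0, 0], dtype=bool),  # level 1: 6 channels
    np.array([1, 1, 1, 1, 1, 1, 1, 1], dtype=bool),  # level 2: all channels
]

def encode_progressive(latent, step_sizes, channel_masks):
    """Return a list of per-layer quantized residuals (the layered 'bitstream')."""
    layers = []
    recon = np.zeros_like(latent)
    for step, mask in zip(step_sizes, channel_masks):
        residual = (latent - recon) * mask[:, None, None]     # skip unselected channels
        symbols = np.round(residual / step).astype(np.int32)  # quantize to integers
        layers.append(symbols)
        recon = recon + symbols * step                         # what the decoder will see
    return layers

def decode_progressive(layers, step_sizes, up_to_level):
    """Reconstruct using only the first `up_to_level + 1` layers of the stream."""
    recon = None
    for level in range(up_to_level + 1):
        dequantized = layers[level] * step_sizes[level]
        recon = dequantized if recon is None else recon + dequantized
    return recon

layers = encode_progressive(latent, step_sizes, channel_masks)
for q in range(len(layers)):
    recon = decode_progressive(layers, step_sizes, q)
    mse = float(np.mean((latent - recon) ** 2))
    sent = int(sum(np.count_nonzero(l) for l in layers[: q + 1]))
    print(f"quality level {q}: nonzero symbols sent = {sent}, latent MSE = {mse:.3f}")
```

Running this prints, for each quality level, how many quantized symbols have been transmitted so far and the remaining error in the latent, so the rate grows and the distortion shrinks as layers are added, which is the behavior the learned quantizer and element selection are designed to optimize.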


Why is it important?

Progressive image delivery is essential when:

* network bandwidth is limited or unstable,
* users need a quick preview of an image before the full download finishes, or
* devices have limited computing power (e.g., mobile, IoT, and edge devices).

Existing progressive learned codecs require multiple models, consume large amounts of memory, or waste bits due to inefficient quantization. DeepHQ is the first method that learns how to compress images progressively and efficiently, resulting in:

* higher compression efficiency (up to ~12% savings on average),
* only ~14% of the model size of competing methods, and
* up to 90% faster decoding.

This makes DeepHQ practical not only for research, but also for real deployment scenarios.

Perspectives

When developing this model, our goal was not only to improve rate–distortion performance, but also to make progressive compression practical. We repeatedly saw that previous models worked well in theory, yet were too heavy and slow in real usage. The most rewarding outcome of our work was seeing that we could reduce the model size dramatically while improving quality at the same time. We believe DeepHQ will contribute to making learned progressive coding usable in real products—especially where flexibility and efficiency are crucial.

JOOYOUNG LEE
Electronics and Telecommunications Research Institute

Read the Original

This page is a summary of: DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding, ACM Transactions on Multimedia Computing, Communications, and Applications, October 2025, ACM (Association for Computing Machinery),
DOI: 10.1145/3773994.
You can read the full text via the DOI above.

