What is it about?
AWMA-MoE is a framework that enhances the quality of generated images while preserving strong watermark robustness. Specifically, we design an attention-based adapter that adaptively embeds watermarks with spatially varying strength across image regions. Building on this, we introduce a Mixture-of-Experts (MoE) architecture that leverages diverse experts to further improve image quality while retaining watermark robustness.
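The two ideas above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: `strength_map` is a hypothetical stand-in for the learned attention adapter (channel-mean saliency squashed by a sigmoid), and each "expert" is simply a biased variant of it, blended with softmax gating weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def strength_map(features):
    """Attention-style per-pixel embedding strength in (0, 1).

    Hypothetical stand-in for the learned attention adapter:
    the channel mean acts as a saliency score, squashed by a sigmoid.
    """
    scores = features.mean(axis=-1)            # (H, W)
    return 1.0 / (1.0 + np.exp(-scores))       # sigmoid -> (0, 1)

def embed_watermark(latent, watermark, alpha):
    """Spatially varying embedding: z' = z + alpha(x, y) * w."""
    return latent + alpha[..., None] * watermark

def moe_embed(latent, watermark, experts, gate_logits):
    """Blend per-expert strength maps with softmax gating weights."""
    gates = softmax(gate_logits)                    # (E,)
    maps = np.stack([e(latent) for e in experts])   # (E, H, W)
    alpha = np.tensordot(gates, maps, axes=1)       # (H, W)
    return embed_watermark(latent, watermark, alpha)

rng = np.random.default_rng(0)
H, W, C = 8, 8, 4
latent = rng.normal(size=(H, W, C))
watermark = rng.normal(size=(H, W, C)) * 0.1
# Each toy "expert" is strength_map with a different bias.
experts = [lambda z, b=b: strength_map(z + b) for b in (-1.0, 0.0, 1.0)]
out = moe_embed(latent, watermark, experts,
                gate_logits=np.array([0.2, 0.5, 0.3]))
```

Because each strength map lies in (0, 1), the perturbation at every pixel is bounded by the watermark magnitude, which is what lets salient regions receive a weaker mark while textured regions absorb a stronger one.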
Read the Original
This page is a summary of: AWMA-MoE: Attention-Guided Watermark Adapter with MoE for Latent Diffusion Models, April 2026, ACM (Association for Computing Machinery), DOI: 10.1145/3774904.3792903.