What is it about?

Generative Adversarial Networks (GANs) have been widely applied in many scenarios thanks to the development of deep neural networks. The original GAN was formulated under the non-parametric assumption that networks have infinite capacity. However, it remains unknown whether GANs can fit a target distribution without any prior information. Because of this overly strong assumption, many issues in GAN training remain unaddressed, such as non-convergence, mode collapse, and vanishing gradients. Regularization and normalization are common ways of introducing prior information to stabilize training and improve the discriminator. In this work, we conduct a comprehensive survey of regularization and normalization techniques from different perspectives on GAN training.
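As an illustration of what such a normalization technique looks like in practice, here is a minimal NumPy sketch of spectral normalization, one of the methods covered in the survey: the weight matrix of a discriminator layer is divided by an estimate of its largest singular value (obtained by power iteration), which bounds the layer's Lipschitz constant. The function name and iteration count are our own choices for this sketch, not taken from the paper.

```python
import numpy as np

def spectral_norm(W, n_iters=20, seed=0):
    """Estimate the largest singular value of W via power iteration."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    v = None
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    # Rayleigh-quotient estimate of the spectral norm
    return u @ W @ v

# A toy 2x2 "weight matrix" with singular values 3 and 1
W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
sigma = spectral_norm(W)
W_sn = W / sigma  # normalized weight: largest singular value is now ~1
```

In a GAN discriminator this normalization would be applied to every layer's weight matrix at each forward pass, so that the discriminator is (approximately) 1-Lipschitz, which is one of the stabilization goals discussed in the survey.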

Why is it important?

First, we systematically describe GAN training from different perspectives and thereby identify the distinct objectives of regularization and normalization. Based on these objectives, we propose a new taxonomy. Furthermore, we compare the performance of the mainstream methods on different datasets and examine the regularization and normalization techniques that are frequently employed in state-of-the-art GANs. Finally, we highlight promising directions for future research in this domain.

Read the Original

This page is a summary of: A Systematic Survey of Regularization and Normalization in GANs, ACM Computing Surveys, November 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3569928.