What is it about?

Deep Neural Networks (DNNs) are increasingly used in software engineering and code intelligence tasks. These powerful models can learn highly generalizable patterns from large datasets through their millions of parameters. At the same time, that large capacity can make them prone to memorizing individual data points. Recent work suggests that the risk of memorization is especially strong when the training dataset is noisy, containing many ambiguous or questionable samples for which memorization is the only recourse. The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models, and to provide insights into how memorization may impact their learning behavior. To observe the extent of memorization, we add random noise to the original training dataset and use several metrics to quantify the impact of that noise on different aspects of training and testing. We evaluate state-of-the-art neural code intelligence models on multiple benchmarks to study memorization effects in the domain of software engineering.
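To make the noise-injection setup concrete, below is a minimal sketch in Python of how random label noise might be added to a classification dataset. The function name, the uniform noise model, and the list-based data layout are illustrative assumptions for this summary, not the paper's actual implementation.

import random

def inject_label_noise(labels, noise_rate=0.25, seed=0):
    # Replace a fixed fraction of labels with a different,
    # randomly chosen class, and remember which indices were
    # corrupted so training accuracy can later be measured
    # separately on noisy vs. clean samples.
    rng = random.Random(seed)
    classes = sorted(set(labels))
    n_noisy = int(noise_rate * len(labels))
    noisy_idx = rng.sample(range(len(labels)), n_noisy)
    noisy_labels = list(labels)
    for i in noisy_idx:
        noisy_labels[i] = rng.choice([c for c in classes if c != labels[i]])
    return noisy_labels, set(noisy_idx)

# Hypothetical usage: a model that fits the corrupted subset as
# well as the clean subset is likely memorizing, not generalizing.
labels = ["sort", "search", "sort", "parse", "search", "parse"]
noisy_labels, noisy_idx = inject_label_noise(labels, noise_rate=0.3, seed=42)

Tracking per-epoch training accuracy on the corrupted indices against the clean ones is one simple way to quantify how much of a model's fit comes from memorization.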


Why is it important?

The tremendous capacity of neural networks, now spanning many billions of trainable parameters, allows them both to learn many generalizable patterns and to simply memorize myriad training samples. However, memorization is a significant and non-obvious threat when training deep learners, perhaps especially for models trained on software engineering data. Source code from the open-source ecosystem is exceptionally repetitive as well as particularly noisy. Both factors encourage memorization, which directly undermines the ability of neural models to generalize. Therefore, as in other domains, it is important to perform a large-scale study of memorization and generalization when training neural models for code intelligence tasks.

Perspectives

This work raises awareness of, and provides new insights into, important issues in training neural models for code intelligence systems that are usually overlooked by software engineering researchers. Our results highlight that millions of trainable parameters allow neural networks to memorize noisy data and can provide a false sense of generalization. We observed that all models manifest some form of memorization, which can be troublesome for most code intelligence tasks because they rely on noise-prone and repetitive data sources, such as code from GitHub.

Md Rafiqul Islam Rabin
University of Houston

Read the Original

This page is a summary of: Memorization and generalization in neural code intelligence models, Information and Software Technology, September 2022, Elsevier. DOI: 10.1016/j.infsof.2022.107066.
