What is it about?

Neural architecture search (NAS), the study of automating the discovery of optimal deep neural network architectures for tasks in domains such as computer vision and natural language processing, has seen rapid growth in the machine learning research community. We show how genetic algorithms (GAs) can be paired with lightly trained objective predictors in an iterative cycle to accelerate multi-objective architecture exploration in two modalities: machine translation and image classification.
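
To make the iterative cycle concrete, here is a minimal Python sketch of a LINAS-style loop. This is an illustration, not the paper's implementation: the toy search space, the measure_objectives stand-in for expensive training and validation, and the simple scalarized selection are all assumptions made for brevity (the paper pairs the predictors with a multi-objective genetic search).

```python
"""Minimal sketch of a predictor-accelerated GA loop (LINAS-style).

All names here (random_architecture, measure_objectives, mutate) are
hypothetical stand-ins, not the paper's API.
"""
import random

import numpy as np
from sklearn.ensemble import RandomForestRegressor

NUM_GENES, GENE_CHOICES = 8, 4  # toy encoding: 8 choices, 4 options each

def random_architecture():
    return [random.randrange(GENE_CHOICES) for _ in range(NUM_GENES)]

def measure_objectives(arch):
    # Stand-in for the expensive step: training/validating a network and
    # measuring it on hardware. Returns (accuracy-like, latency-like).
    return -sum((g - 1.5) ** 2 for g in arch), float(sum(arch))

def mutate(arch, rate=0.2):
    return [random.randrange(GENE_CHOICES) if random.random() < rate else g
            for g in arch]

archive, measured = [], []
population = [random_architecture() for _ in range(10)]

for _ in range(5):  # the iterative cycle
    # 1) Expensive validation of the current candidate batch.
    for arch in population:
        archive.append(arch)
        measured.append(measure_objectives(arch))

    # 2) Fit one lightly trained predictor per objective on all data so far.
    X, y = np.array(archive), np.array(measured)
    predictors = [RandomForestRegressor(n_estimators=50).fit(X, y[:, k])
                  for k in range(y.shape[1])]

    # 3) Cheap genetic search in predictor space: mutate known architectures
    #    and rank offspring by predicted objectives (a simple scalarization
    #    stands in here for a full multi-objective selection).
    offspring = [mutate(random.choice(archive)) for _ in range(200)]
    preds = np.column_stack([p.predict(np.array(offspring))
                             for p in predictors])
    scores = preds[:, 0] - preds[:, 1]  # high accuracy, low latency
    population = [offspring[i] for i in np.argsort(scores)[::-1][:10]]
```

The point the sketch captures is that only step 1 pays the real training and validation cost; steps 2 and 3 are cheap, so each expensive evaluation buys a much larger exploration of the search space.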

Why is it important?

Most NAS research has centered on the computer vision task of image classification; only recently have other modalities, such as language modeling and machine translation, begun to be investigated. As a result, how NAS approaches generalize and perform across modalities and tasks has not been studied in depth.

Perspectives

The goal of the work was to demonstrate how GAs can be uniquely leveraged to accelerate multi-objective neural architecture search for the modalities of machine translation and image classification. The LINAS algorithm described in the paper offers a modular framework that can easily be modified to fit a variety of NAS application domains. As NAS research continues to gain momentum, we highlight the need to continue to investigate the generalizability of NAS approaches in modalities outside of computer vision.

Daniel Cummings
Intel Corp

Read the Original

This page is a summary of: Accelerating neural architecture exploration across modalities using genetic algorithms, July 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3520304.3528786.
You can read the full text via the DOI above.
