What is it about?

Training modern neural networks usually relies on “backpropagation”, a method that needs both forward and backward passes and consumes a lot of memory and energy. This becomes a bottleneck when practitioners want to run and train AI models directly on low‑power chips, such as those in mobile or edge devices. This work introduces NoiseZO, a new way to train neural networks that only requires forward passes. Instead of computing precise gradients, it uses the natural random noise that occurs in resistive random‑access memory (RRAM) hardware as a source of perturbations. These perturbations allow the model to estimate how its parameters should change, following the idea of zeroth‑order optimization, but without adding extra randomness in software. The authors design a complete “forward‑only” training framework that maps neural network operations to RRAM crossbar arrays, carefully models and leverages their device‑level noise, and shows how to update model parameters using just the observed outputs. They evaluate NoiseZO on standard learning tasks and demonstrate that it can achieve competitive accuracy compared with traditional backpropagation, while greatly reducing memory and computation requirements on RRAM‑based accelerators.
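The core idea above can be illustrated with a minimal sketch of zeroth-order optimization: perturb the parameters with random noise, run forward passes only, and use the change in loss to estimate an update direction. This is not the authors' implementation; here the Rademacher perturbation delta is generated in software, standing in for the intrinsic RRAM device noise that NoiseZO exploits, and the toy linear model and function names are illustrative assumptions.

```python
import numpy as np

def loss(theta, X, y):
    # A forward pass only: toy linear model with squared error.
    pred = X @ theta
    return np.mean((pred - y) ** 2)

def zo_step(theta, X, y, eps=1e-3, lr=0.05, rng=None):
    """One forward-only update via a two-point zeroth-order estimate.

    delta simulates hardware noise; no backward pass is used."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random perturbation
    f_plus = loss(theta + eps * delta, X, y)   # forward pass 1
    f_minus = loss(theta - eps * delta, X, y)  # forward pass 2
    grad_est = (f_plus - f_minus) / (2 * eps) * delta  # gradient estimate
    return theta - lr * grad_est

# Toy usage: recover y = 2x using forward passes alone.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 1))
y = 2.0 * X[:, 0]
theta = np.zeros(1)
for _ in range(200):
    theta = zo_step(theta, X, y, rng=rng)
```

After a few hundred such steps the estimated parameter settles near the true value of 2, showing that noisy forward evaluations alone can drive learning.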


Why is it important?

As AI systems move from large data centers to small, power‑constrained devices, the conventional way of training neural networks becomes too heavy. Backpropagation needs to store many intermediate values and perform extra computations, which limits on‑chip learning and continual adaptation at the edge. NoiseZO is important because it turns an unavoidable hardware drawback—device noise in RRAM—into a useful feature for learning. By using this noise to drive zeroth‑order optimization, the method eliminates the backward pass and enables efficient, forward‑only training that fits the strengths and limitations of RRAM hardware. This approach opens a path toward AI chips that can both run and update models locally with much lower energy and memory cost. Such capability is crucial for privacy‑preserving, always‑on, and adaptive applications, where devices must learn from new data in real time without relying on the cloud.

Perspectives

In my view, the key idea of this work is that we do not always need backpropagation to train useful neural networks. While developing NoiseZO, I found it very interesting that the “unwanted” noise in RRAM devices can actually help learning rather than hurt it. This project convinced me that algorithms and hardware should be designed together, not separately. I hope this forward‑only, hardware‑aware approach can inspire new training methods for future edge AI chips.

Shuqi Wang
University of Hong Kong

Read the Original

This page is a summary of: NoiseZO: RRAM Noise-Driven Zeroth-Order Optimization for Efficient Forward-Only Training, June 2025, Institute of Electrical & Electronics Engineers (IEEE),
DOI: 10.1109/dac63849.2025.11132557.
You can read the full text via the DOI above.
