What is it about?
This paper is a tutorial and survey on predictive coding networks (PCNs), neural networks inspired by how the brain processes information. Rather than passively receiving input, the brain is thought to constantly make predictions and pass along only the "surprises," i.e., prediction errors. PCNs work the same way, trained with an algorithm called inference learning instead of standard backpropagation. The paper walks through the mathematics, draws connections to familiar ML methods, and shows that PCNs are a more general class of neural network than standard ones. An open-source Python library accompanies the paper for hands-on experimentation.
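The predict-then-correct idea can be sketched in a few lines of numpy. The toy two-layer network below is purely illustrative (it does not use the paper's library or notation, and all names are ours): hidden activities are first relaxed to minimize the squared prediction error, the "surprise", and only then are the weights updated locally, which is the basic shape of inference learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer PCN: layer 0 holds the (clamped) input, layer 1 predicts
# it top-down through weights W. Names and sizes are illustrative only.
n_in, n_hid = 4, 3
W = rng.normal(scale=0.1, size=(n_in, n_hid))

def energy(x_in, x_hid, W):
    err = x_in - W @ x_hid          # prediction error ("surprise")
    return 0.5 * np.sum(err ** 2)

def inference_learning_step(x_in, W, n_infer=50, lr_x=0.1, lr_w=0.05):
    x_hid = np.zeros(n_hid)
    # Inference phase: relax hidden activities to reduce the energy.
    for _ in range(n_infer):
        err = x_in - W @ x_hid
        x_hid += lr_x * (W.T @ err)  # gradient descent on activities
    # Learning phase: update weights using the equilibrated errors.
    err = x_in - W @ x_hid
    W = W + lr_w * np.outer(err, x_hid)  # local, Hebbian-like update
    return W, energy(x_in, x_hid, W)

x = rng.normal(size=n_in)
W, E0 = inference_learning_step(x, W)
for _ in range(100):
    W, E = inference_learning_step(x, W)
print(E0, E)  # energy shrinks as the network learns to predict x
```

Note that, unlike backpropagation, each update uses only locally available quantities (the error at a layer and the activity of its neighbor), which is part of what makes PCNs biologically plausible.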
Why is it important?
Despite remarkable recent AI progress, biological brains still outperform machines in energy efficiency and adaptability. This paper arrives amid surging interest in NeuroAI and fills a real gap: a rigorous yet accessible mathematical introduction to PCNs for ML practitioners, clarifying connections to backpropagation, VAEs, and diffusion models.
Perspectives
The PC literature can be difficult to navigate: different communities use different framings. We wanted to create the resource we wished had existed when we started. Seeing how deeply PCNs are connected to a broad range of widely used ML techniques (graphical models, VAEs, diffusion models, Boltzmann machines, factor analysis, probabilistic PCA) was the most rewarding part of writing this, and we hope it opens the field up to a broader ML audience.
Björn van Zwol
Universiteit Leiden
Read the Original
This page is a summary of: Predictive Coding Networks and Inference Learning: Tutorial and Survey, ACM Computing Surveys, February 2026, ACM (Association for Computing Machinery),
DOI: 10.1145/3797870.