What is it about?

The neural networks powering today's AI systems learn by changing their connection weights to make better and better predictions about their training data ("in-weight learning"). Through this training, these systems acquire an emergent ability to learn new tasks from only a few examples provided as inputs ("in-context learning"). In this work, we show that when these two fundamentally different kinds of learning interact within a single neural network, the network naturally reproduces key aspects of human learning, including a human-like ability to generalize to unseen inputs, a dependence on the learning curriculum, and a tradeoff between flexibility and retention.
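To make the distinction concrete, here is a minimal illustrative sketch in Python, not the paper's model: in-weight learning is shown as gradient descent that changes a model's stored weights, while in-context learning is mimicked by a fixed procedure (the hypothetical `predict_in_context` helper) that infers a new task purely from example pairs supplied as input, with no weights being updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- In-weight learning: knowledge ends up stored in the weights. ---
# A linear model is fit to one task by gradient descent; the weights change.
w = np.zeros(2)
X = rng.normal(size=(32, 2))
y = X @ np.array([1.5, -0.5])                 # the task to be learned
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)     # gradient of mean squared error
    w -= 0.1 * grad                           # weight update = in-weight learning
print("weights after training:", w.round(2))

# --- In-context learning: the weights stay frozen; the task is read off a few
# example pairs given as input. Least squares over the "context" examples here
# stands in for what a trained network does internally.
def predict_in_context(context_X, context_y, query_x):
    # No stored parameters are modified; the mapping comes from the context alone.
    w_ctx, *_ = np.linalg.lstsq(context_X, context_y, rcond=None)
    return query_x @ w_ctx

ctx_X = rng.normal(size=(4, 2))               # a few in-context examples
ctx_y = ctx_X @ np.array([-2.0, 3.0])         # a new, previously unseen task
print("in-context prediction:",
      predict_in_context(ctx_X, ctx_y, np.array([1.0, 1.0])).round(2))
```

The contrast the sketch is meant to convey: the first mode commits a task to the model's parameters (slow to acquire, durable), while the second adapts to a new task on the fly from its inputs (flexible, but nothing is retained once the context is gone).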


Why is it important?

Our work offers a new perspective on traditional dual-systems theories of human cognition, and suggests that important computational principles may be shared between artificial intelligence and human cognition.

Perspectives

I think this work provides one example of how cognitive science and artificial intelligence can productively inform each other, and of how translating ideas across fields can cast them in a new light.

Jacob Russin
Brown University

Read the Original

This page is a summary of: Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning, Proceedings of the National Academy of Sciences, August 2025, DOI: 10.1073/pnas.2510270122.
You can read the full text via the DOI above.

Contributors

The following have contributed to this page: