What is it about?

Imagine a future where quantum computers supercharge machine learning: training models in seconds, extracting insights from massive datasets and powering next-generation AI. That future might be closer than you think, thanks to a breakthrough from researchers at Australia’s national science agency, CSIRO, and The University of Melbourne.

Until now, one big roadblock stood in the way: errors. Quantum processors are noisy, and quantum machine learning (QML) models need deep circuits with hundreds of gates. Even tiny errors pile up fast, wrecking accuracy. The usual fix, quantum error correction, works in principle, but it’s expensive. We’re talking millions of qubits just to run one model, way beyond today’s hardware.

So, what’s the game-changer? The team discovered that you don’t need to correct everything. In QML models, more than half the gates are trainable, meaning they adjust during learning. By skipping error correction for these gates, the model can ‘self-correct’ as it trains (the toy sketch below illustrates the idea). The result? Accuracy almost as good as full error correction, but with only a few thousand qubits instead of millions.

Lead author Haiyue Kang, a PhD student at The University of Melbourne, describes this work as an important step forward. “Until now, quantum machine learning has mostly been tested in perfect, error-free simulations. But real quantum computers aren’t perfect: they’re noisy, and that noise makes today’s hardware incompatible with these models. In other words, there’s a big gap between the theory and actually running QML on quantum processors without losing accuracy.”

Professor Muhammad Usman, Head of the Quantum Systems team at CSIRO, is the study’s senior author. “This is a paradigm shift,” Professor Usman said. “We’ve shown that partial error correction is enough to make QML practical on the quantum processors expected to be available in the near future.”

Why does this matter? Because it could move quantum machine learning from theory to reality much sooner than expected. Faster training, smarter AI and real-world quantum advantage could now be within reach. The study, published in Quantum Science and Technology, marks a major milestone for quantum computing and AI. It’s not just a technical tweak: it’s a rethink of how we build quantum algorithms for noisy hardware.
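To make the ‘self-correction’ idea concrete, here is a minimal toy sketch (ours, not the authors’ code): a single trainable rotation gate suffers a fixed coherent over-rotation error that is deliberately left uncorrected, and plain gradient descent on the parameter absorbs the error during training. The single-qubit setup and all names (ry, p1, eps, target) are illustrative assumptions; the study itself concerns deep, multi-qubit QML circuits under realistic noise with partial error correction.

```python
import numpy as np

# Toy sketch (illustrative only, not the study's code): one trainable
# single-qubit rotation whose gate error is left uncorrected. Gradient
# descent on the parameter absorbs the error, mimicking the
# "self-correction" of trainable gates described above.

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

eps = 0.3      # fixed coherent over-rotation error on the trainable gate
target = 0.9   # desired probability of measuring |1>
theta = 0.1    # trainable parameter
lr = 0.5       # learning rate

def p1(theta):
    """Probability of |1> after applying the *noisy* gate to |0>."""
    state = ry(theta + eps) @ np.array([1.0, 0.0])  # error enters here
    return float(abs(state[1]) ** 2)

# Minimise the squared loss (p1 - target)^2 using the parameter-shift
# rule, which gives the exact gradient for rotation gates.
for _ in range(200):
    shift = p1(theta + np.pi / 2) - p1(theta - np.pi / 2)  # = 2 * dp1/dtheta
    theta -= lr * shift * (p1(theta) - target)

ideal = 2 * np.arcsin(np.sqrt(target))  # error-free optimum
print(f"learned theta      : {theta:.3f}")
print(f"learned theta + eps: {theta + eps:.3f}  (matches ideal)")
print(f"ideal theta        : {ideal:.3f}")
print(f"final p(|1>)       : {p1(theta):.4f}  (target {target})")
```

After training, the learned parameter settles near the ideal angle minus the error, so the noisy gate still implements the right overall rotation. That is the intuition behind leaving trainable gates uncorrected: the optimiser folds the error into the parameters for free.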


Why is it important?

Bottom line: quantum machine learning might not be decades away. Thanks to this clever approach, it could be powering real-world applications in the near future.

Read the Original

This page is a summary of: Almost fault-tolerant quantum machine learning with drastic overhead reduction, Quantum Science and Technology, December 2025, Institute of Physics Publishing. DOI: 10.1088/2058-9565/ae2157.
