What is it about?

This paper is about improving how we automatically choose and tune the AI models that act as the “brains” of Digital Twins: computer replicas of real-world systems such as buildings, machines, or energy networks. The authors show that common trial-and-error methods do not work well for physics-based AI, because these models behave in predictable ways rather than randomly. They introduce a new method, BanditNAS, which learns from each trial which model designs are most promising, and which remains effective when models improve slowly or need multiple training stages. The approach is tested on several scientific problems and shown to pick better models more reliably in many practical situations, helping Digital Twins run more accurately and efficiently in real time.
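
To make the idea of "learning which model designs work best" concrete, here is a minimal sketch of a generic multi-armed bandit (UCB1 rule) choosing among a handful of candidate architectures. It is purely illustrative: the architecture names, the evaluate() stub, and the UCB1 rule are assumptions made for this summary, not the BanditNAS algorithm from the paper; in a real Digital Twin workflow each "pull" would be a short training run of a candidate physics-based model.

import math
import random

# Hypothetical candidate architectures; in a Digital Twin setting these
# would be physics-informed network configurations (widths, depths, etc.).
CANDIDATES = ["small_pinn", "wide_pinn", "deep_pinn", "fourier_pinn"]

def evaluate(arch: str) -> float:
    """Placeholder for a short training run returning a validation score in [0, 1].
    A real workflow would train the candidate briefly and score it on held-out data."""
    base = {"small_pinn": 0.70, "wide_pinn": 0.78, "deep_pinn": 0.82, "fourier_pinn": 0.75}
    return min(1.0, max(0.0, random.gauss(base[arch], 0.05)))

def ucb1_search(rounds: int = 60) -> str:
    counts = {a: 0 for a in CANDIDATES}    # how many times each design was tried
    totals = {a: 0.0 for a in CANDIDATES}  # sum of observed scores per design

    # Try every architecture once so each arm has an initial estimate.
    for arch in CANDIDATES:
        totals[arch] += evaluate(arch)
        counts[arch] += 1

    for t in range(len(CANDIDATES), rounds):
        # Pick the design with the highest upper confidence bound:
        # mean score plus an exploration bonus that shrinks as a design is tried more.
        arch = max(
            CANDIDATES,
            key=lambda a: totals[a] / counts[a]
            + math.sqrt(2 * math.log(t + 1) / counts[a]),
        )
        totals[arch] += evaluate(arch)
        counts[arch] += 1

    # Return the architecture with the best average observed score.
    return max(CANDIDATES, key=lambda a: totals[a] / counts[a])

if __name__ == "__main__":
    print("Selected architecture:", ucb1_search())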

Why is it important?

This work is important because Digital Twins are increasingly used to control and optimise real-world systems in real time, such as energy infrastructure, industrial processes, and climate-sensitive technologies. In these settings, AI models must be both highly accurate and fast, and there is little room for wasted computing effort or poor model choices. Existing automated methods for choosing AI models treat the problem as largely random, which works reasonably well for many applications but breaks down for physics-based models that behave in predictable ways and often improve only late in training. What is unique and timely about this research is that it recognises this mismatch and proposes a new, more appropriate way to choose AI models that reflects how scientific and physics-informed models actually learn. By doing so, it can reliably identify better-performing models while using fewer computational resources. This matters as Digital Twins move from research prototypes into real-world deployment, where efficiency, reliability, and real-time performance are critical. The work helps practitioners choose the right optimisation strategy for the problem at hand, potentially accelerating adoption of Digital Twins in safety-critical and resource-constrained environments.

Perspectives

From our perspective, this work grew out of repeated frustration with optimisation methods that worked well in generic machine learning but consistently failed when applied to physics-based models. While developing and deploying Digital Twins, we saw firsthand how late-converging models, optimiser switching, and deterministic training data broke many standard assumptions. This paper was our opportunity to formalise those observations and propose a principled alternative grounded in both theory and practice. We hope this work helps bridge the gap between scientific modelling and modern AI optimisation, and gives practitioners clearer guidance on how to choose and tune models under real-world constraints, rather than relying on methods that were never designed for these settings.

Craig Bower
University of Leicester

Read the Original

This page is a summary of: Bandit Neural Architecture Search for Digital Twin Optimisation: A Scientific Machine Learning Approach, ACM Transactions on Autonomous and Adaptive Systems, January 2026, ACM (Association for Computing Machinery). DOI: 10.1145/3786781.
You can read the full text via the DOI above.
