What is it about?

A problem with many machine learning models is that they are black boxes: it is not straightforward for humans to understand how the model's input parameters influence its output. In this work, we use an alternative model architecture that, in contrast to black-box models, is fully human-understandable. We apply this method to a case in fusion research, investigating how the growth rate of a plasma instability depends on different plasma parameters. With this method, we identify several parameter patterns that are not easily found with simpler approaches, as sketched below.
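
To make the idea concrete, here is a minimal sketch of one kind of interpretable surrogate model: a sparse polynomial regression whose surviving terms can be read off directly. This is only an illustration of the general approach, not the architecture used in the paper; the parameter names, data, and coefficients below are hypothetical stand-ins.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Hypothetical plasma parameters (stand-ins, not the paper's actual inputs):
# a temperature gradient, a density gradient, and magnetic shear.
X = rng.uniform(0.0, 2.0, size=(500, 3))

# Synthetic "growth rate" containing an interaction term, plus noise.
y = 1.5 * X[:, 0] + 0.8 * X[:, 0] * X[:, 2] - 0.5 * X[:, 1] \
    + 0.05 * rng.standard_normal(500)

# Sparse polynomial fit: the L1 penalty drives most coefficients to zero,
# so the terms that survive form a short, human-readable formula.
model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    Lasso(alpha=0.01),
)
model.fit(X, y)

# Print the surviving terms and their coefficients.
names = model[0].get_feature_names_out(["grad_T", "grad_n", "shear"])
for name, coef in zip(names, model[1].coef_):
    if abs(coef) > 1e-3:
        print(f"{name}: {coef:+.3f}")

Because the fit keeps only a handful of named terms, a human can inspect exactly how each parameter, and each interaction between parameters, contributes to the predicted growth rate.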

Why is it important?

As machine learning and AI methods become more prevalent in both research and society, it is important that we make efforts to better understand how models arrive at their predictions. This matters not only for trusting the models and ensuring that they behave as intended, but also for gaining insight into the data the models are trained on.

Perspectives

Writing this article was great fun, in particular since much of the work was based on a project carried out by an excellent student at our university. It is encouraging that the next generation of engineers and researchers is interested in making AI more human-understandable and transparent, and therefore safer.

Andreas Gillgren
Chalmers tekniska högskola

Read the Original

This page is a summary of: Investigating characteristics of the growth rates from QuaLiKiz using an interpretable surrogate model, Physics of Plasmas, May 2025, American Institute of Physics. DOI: 10.1063/5.0261456
