What is it about?

Artificial Intelligence (AI) is gradually changing medical practice and is already used in major disease areas, e.g., cancer, neurology, and cardiology. There are also concerns about the safety and dangers of AI. One major concern is that AI is a ‘black box’: we cannot see its learning process, so we do not really know when an AI system has developed a problem. Healthcare organisations also lack the data infrastructure required to collect the data needed to optimally train algorithms.


Why is it important?

To address the safety concern, we construct and test two scenarios for applying cybersecurity with autonomous artificial intelligence: (1) self-optimising predictive cyber risk analytics of failures in healthcare systems, and (2) self-adaptive forecasting of medical production and supply chain bottlenecks, e.g., during future pandemics. To construct the two testing scenarios, we synthesise data from the Covid-19 pandemic.
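The second scenario, self-adaptive forecasting of supply chain bottlenecks, can be illustrated with a minimal sketch. The example below is a hypothetical illustration, not the paper's actual model: it uses simple exponential smoothing, which updates its forecast as each new observation arrives, over a synthetic pandemic-style demand surge, and flags periods where forecast demand exceeds supply capacity. All names (`adaptive_forecast`, `flag_bottlenecks`) and the synthetic numbers are assumptions made for illustration.

```python
def adaptive_forecast(demand, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing.

    The forecast adapts itself as each new observation arrives,
    a minimal stand-in for 'self-adaptive' forecasting.
    """
    forecast = demand[0]
    forecasts = [forecast]
    for observed in demand[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

def flag_bottlenecks(demand, supply_capacity, alpha=0.5):
    """Return the time steps where forecast demand exceeds capacity."""
    forecasts = adaptive_forecast(demand, alpha)
    return [t for t, f in enumerate(forecasts) if f > supply_capacity]

# Synthetic pandemic-style surge: flat demand, then a rapid ramp-up
demand = [100 + 40 * t if t > 5 else 100 for t in range(12)]
print(flag_bottlenecks(demand, supply_capacity=200))  # periods at risk
```

A real system would re-estimate the smoothing parameter online and draw on live supply-chain data, but the core loop, forecast, observe, adapt, flag, is the same.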


The volume of data generated by edge devices creates diverse challenges in developing data strategies for training AI algorithms. Designing sparse, compact, and efficient AI algorithms for complex, coupled healthcare systems demands prior optimisation of the data strategy and decisions on how training data are collected and assessed. In other words, the training data strategy should come before, or at least alongside, the development of the algorithm. This is a particular risk concern because a new AI algorithm will lack such data input and must be tested in constructed scenarios.

Dr Petar Radanliev
University of Oxford

Read the Original

This page is a summary of: Advancing the cybersecurity of the healthcare system with self-optimising and self-adaptative artificial intelligence (part 2), Health and Technology, August 2022, Springer Science + Business Media.
DOI: 10.1007/s12553-022-00691-6.