What is it about?

This article investigates the risk of ‘technological singularity’ from artificial intelligence. The investigation constructs multiple risk forecasts, which are synthesised into a new approach for counteracting risks from artificial intelligence (AI) itself.


Why is it important?

The article forecasts emerging cyber-risks from the integration of AI in cybersecurity. The new methodology addresses the risk of AI attacks, and it forecasts the value of AI in defence and in preventing rogue AI devices from acting independently. Since totally securing a system is not feasible, we need to construct algorithms that enable systems to continue operating even when parts of them have been compromised.


The presumptions in this article are based on the concept that any future ‘superintelligence’ would have intelligence far greater than the most intelligent human minds. This leaves very limited strategic options, but one that remains available at present is for humanity to continue doing what it has historically done to counter global threats: form coalitions. Intelligence brings the capacity for decision making, and as human intelligence shows, two intelligent people can hold completely opposite perceptions of the world. A future artificial ‘superintelligence’ would likely face similar decision-making challenges. If this presumption proves correct, then the mitigation strategy for a ‘technological singularity’ is the ability to form coalitions with like-minded artificial ‘superintelligences’.

Dr Petar Radanliev
University of Oxford

Read the Original

This page is a summary of: Super-forecasting the ‘technological singularity’ risks from artificial intelligence, Evolving Systems, June 2022, Springer Science + Business Media,
DOI: 10.1007/s12530-022-09431-7.