What is it about?

This page considers the protection of patient rights under the AI Act. The AI Act is grounded in, and at the same time aims to protect, fundamental rights, while fulfilling the safety requirements it prescribes across the whole lifecycle of AI systems. To ensure safety and, in parallel, respect for fundamental rights, the AI Act divides AI systems into risk classes. Based on this risk classification, it sets out requirements that each class must meet for an AI system to be legitimately offered on the EU market and considered safe. However, despite their classification, some minimal-risk AI systems may still pose risks to fundamental rights and user safety, and therefore require attention. The absence of specific requirements for minimal-risk AI offers no protection against possible safety issues, de facto failing to protect fundamental rights for this class of AI. Therefore, although the AI Act enjoys broad ex litteris coverage, the significance of this applicability is limited. In the health sector, the AI Act turns out to be a very elaborate and complex blanket that leaves the feet uncovered, since its safety requirements apply to only a small proportion of the AI systems falling within the scope of the legal framework.

Read the Original

This page is a summary of: A Blanket That Leaves the Feet Cold: Exploring the AI Act Safety Framework for Medical AI, European Journal of Health Law, February 2023, Brill,
DOI: 10.1163/15718093-bja10104.