What is it about?
The new generation of AI raises the question of which laws should be enacted to reap its benefits while avoiding unnecessary harm. This paper explains which European laws already apply to ChatGPT and other generative AI models (e.g., data protection and non-discrimination law) and which are currently being debated. Importantly, it critiques the proposed European AI Act: the authors argue that the proposal imposes too heavy a burden on developers of AI systems, particularly when those systems are deployed in low-risk scenarios. Another EU instrument the paper covers is the Digital Services Act (DSA), which was enacted to combat the spread of fake news and harmful content on the internet. The paper argues that the DSA does not apply to generative AI models, even though it should. Based on these observations, the paper makes several policy proposals. Developers and users of AI systems should have clear disclosure obligations. Conversely, risk management systems should be mandatory only when foundation models are used in high-risk use cases, not for all models. Developers should, however, have specific data governance duties to mitigate discrimination. Finally, the Digital Services Act should be extended to cover ChatGPT and other generative AI models.
Why is it important?
ChatGPT entered the market at a crucial point in time. The EU is about to finalize the AI Act and therefore has a unique opportunity to enact a regulatory framework that addresses the challenges created by generative AI. However, the proposal urgently needs to be updated and modified. This article makes concrete suggestions on how to regulate generative AI, in the EU and beyond.
Read the Original
This page is a summary of: Regulating ChatGPT and other Large Generative AI Models, June 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3593013.3594067.