
AI in the EU: copyright ownership and liability dilemmas

Artificial intelligence (AI) is a strategic technology with many benefits, but some of its applications raise serious concerns for the application of EU rules designed to uphold safety, safeguard fundamental rights and attribute liability.

Copyright ownership of AI-generated content

When considering the potential copyright ownership of content produced by generative AI models such as ChatGPT, the initial question is whether such content qualifies for copyright protection at all. Under EU law, copyright hinges on the originality of a work and its expression in an identifiable form, as the Court of Justice of the European Union held in Cofemel – Sociedade de Vestuário SA v G-Star Raw CV (Case C-683/17). Protection is contingent on the creation being the author’s own intellectual creation and being expressed in a sufficiently precise and objective manner. This criterion underscores the essence of originality within copyright law: mere mechanistic reproduction without creative input does not suffice to trigger copyright protection.

Prompt-based generative AI adds a new dimension to copyright law, distinct from earlier cases such as Infopaq International A/S v Danske Dagblades Forening (Case C-5/08), which concerned human-authored works. There is, however, no settled law on whether, or how, a computer-generated work can satisfy the EU requirement that a work be the “author’s own intellectual creation”, and the absence of pertinent case law further complicates the determination of whether AI-generated content meets the threshold for copyright protection.

In practical terms, this requirement means that a person must exercise their own creative choices for a work to attract copyright protection. Prompt-based AI, however, blends human input with machine learning, so identifying who truly authored the output is far from clear-cut. The technology blurs the line between what is made by a person and what is made by a machine, making it difficult to apply established copyright rules to AI-generated content.

The next question is who owns the copyright in AI-created works that do qualify for protection. Candidates include the developer of the AI model, the person using it, or both. The developer might plausibly be regarded as the primary copyright holder, but the rules set by platforms also matter, as they may assign ownership contractually. For instance, ChatGPT’s terms of service purport to give the user ownership of any copyright in the output. These terms differ between platforms, however, and are subject to change.

Potential intellectual property risks posed by generative AI models: legal liability for developers

There are essentially two ways through which an AI model might encroach upon IP rights: during its training phase and subsequently in its generated output. Generative AI models like GPT, known as “large language models” (LLMs), undergo training on extensive datasets, ideally comprising high-quality text to enhance the model’s comprehension of language patterns. Notably, the quality of the training data significantly impacts the model’s performance.

Moreover, under EU legislation, developers using training datasets that incorporate databases must be alert to potential infringement of underlying rights. If the dataset includes databases protected by copyright, acts such as copying them could constitute infringement. Similarly, if a database attracts the sui generis database right, extracting or re-utilising a substantial part of its contents could also lead to legal consequences. These provisions highlight the importance of understanding and complying with intellectual property law when assembling datasets for training purposes within the EU.

The EU Digital Single Market Directive (2019/790/EU) requires Member States to establish text and data mining (TDM) exceptions. Article 3 provides a mandatory exception for research organisations and cultural heritage institutions mining works to which they have lawful access for scientific research. Article 4 extends a broader exception to any user, but rights holders may opt out by expressly reserving their rights, in the case of online content through machine-readable means.
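In practice, an express reservation of rights is often made through machine-readable signals. One common, though not legally standardised, approach is to disallow known AI crawlers in a site’s robots.txt file. The sketch below is purely illustrative: the crawler tokens shown (GPTBot, Google-Extended) are those publicly documented by OpenAI and Google, and whether a given robots.txt entry satisfies Article 4 remains untested in court.

```
# Illustrative robots.txt reserving rights against specific AI crawlers
# (crawler tokens are examples; each operator publishes its own name)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary search crawlers remain free to index the site
User-agent: *
Allow: /
```

Note that robots.txt is a voluntary convention, so such a reservation depends on crawlers honouring it; some rights holders therefore pair it with explicit terms-of-use language.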

Furthermore, proposed amendments to the EU AI Act necessitate disclosure of copyright-protected training datasets for foundation models (like LLMs), a significant consideration for copyright owners concerned about potential unauthorised use of their works in AI training.

Regarding copyright infringement, the Information Society Directive (2001/29/EC) grants rights holders exclusive reproduction rights for their works. Reproducing copyrighted works, even within datasets scraped from the internet, may constitute infringement unless exceptions apply. However, the opaque nature of AI models often conceals specific copyrighted works within their datasets, making it challenging for rights holders to identify or prove infringement.

While rights holders cannot compel AI developers to disclose training data, proposed amendments to the EU Artificial Intelligence Act may mandate such disclosure for certain AI models, particularly those generating images.

Although the EU Digital Single Market Directive includes TDM exceptions, not all Member States have implemented the Directive, and national implementations vary even where it has been adopted.

The EU AI Act

On 13 March 2024, the European Parliament plenary session officially approved the EU AI Act at first reading.

The EU AI Act introduces a comprehensive regulatory framework governing the development, deployment and use of artificial intelligence (AI) systems within the European Union (EU). Among its provisions, Article 53 sets out obligations for providers of general-purpose AI (GPAI) models. These include creating and keeping up to date technical documentation on how the model was trained and tested, complying with EU copyright law, and publishing a sufficiently detailed summary of the content used to train the GPAI model.

The law applies to various parties involved in the AI industry. This includes companies that introduce AI systems or GPAI models to the EU market, regardless of where they’re based. It also covers users of AI systems within the EU and even companies from outside the EU if their AI technology is used in the EU. By encompassing both providers and users of AI technology, the EU AI Act aims to ensure a harmonised approach to AI governance while promoting transparency, accountability, and compliance with established legal frameworks, such as copyright law.

The European AI Office

The European Union’s AI Act aims to build trust in AI. While many AI applications offer significant societal benefits, certain systems carry risks that necessitate regulatory oversight. To this end, the European AI Office, inaugurated in February 2024 under the Commission’s purview, assumes the pivotal role of overseeing enforcement and implementation of the AI Act across Member States. Tasked with fostering an environment where AI respects human dignity, rights and trust, the Office facilitates collaboration, innovation and research within the AI landscape while engaging in international dialogues to align global AI governance standards. Positioned as the hub of AI expertise within the European Commission, the AI Office spearheads efforts to cultivate trustworthy AI frameworks, to safeguard against potential risks, and to position Europe as a frontrunner in the ethical and sustainable advancement of AI technologies.