
Searching for the owner: The raging copyright issues in generative AI work


Generative Artificial Intelligence has leaped into the spotlight with an explosion of available tools to create text, images, video, music, and other media. 

The only thing coming close to matching the collective wonder at these technologies is concern about their impacts. There are many questions about how core concepts of Intellectual Property will be affected and how companies can adapt their strategies to avoid pitfalls.

Copyright is a specific concern both for companies using this exciting technology and for those whose property may be appropriated.

Large amounts of content are used to train AI models, and the current set of applications does not indicate which sources were used to derive the content they output.

Japan appears to have taken the position that training an AI model from data is permissible regardless of the content or how it was obtained. Most other jurisdictions have not taken a stance on this yet, but there will need to be guidance on whether training a model on copyrighted material is in itself a copyright violation or can be considered Fair Use. 

Numerous lawsuits have already been filed in the US against AI art generators alleging copyright infringement, but none has been resolved yet. The legality of training AI models on copyrighted materials is being tested. If training on copyrighted data without permission is not allowed, it could halt the advancement of some generative AI domains.

The ability of AI to generate content that resembles or replicates previous works makes it difficult to determine who the original creator was. This has been especially problematic with image generation, where an artist’s style can be mimicked simply by requesting it in the prompt given to the generative AI. Whether these works are considered derived from other works is a complex question and comes down to how the transformative nature of AI platforms is interpreted.

There are many questions still unanswered about the rights and protections that AI-generated works have. 

India has granted copyright for AI-generated works, requiring a human to be a co-author and owner of the copyright. The United States Copyright Office does not currently recognise copyright for works not “created by a human being”. This may change through legislation or ongoing or future court cases.

If copyright protections for AI-generated content do become available in places where they are not now, it is not clear who would have the rights to the generated media. 

Generally, one would expect the end user of the tool to be the creator, much like a photographer using a camera. However, it can be argued that the way the models are built has just as much influence over the output as the prompts from users. There is certainly an argument that the provider of the tool could retain rights, whether through changes to the law or through the terms of a contract for the use of the application.

Generative AI systems create content based on their training models. It is possible they will output content that is an exact match to an existing work. More likely, they will create something similar enough to make it obvious where the content was derived from. In either case, it cannot be argued that the content was created independently of a work used to train the model, and so it is a derivative work.

There will be many new questions that come up as generative AI systems advance. Companies do not want to fall behind in using a technology that can revolutionise parts of every industry and large swaths of some. 

Many of these uses are for accomplishing tasks rather than creating content. But when building, using, or distributing content from generative AI systems, companies will need to be aware of the current laws governing copyright in the jurisdictions in which they operate and how those laws are adapting.

At the very least, everyone should understand what was used to train the AI models they are using and pay close attention to the EULAs (end-user license agreements) or contracts they agree to when using these tools.


Chris Hardee is the AVP of Technology at Lumenci
