An artificial intelligence company based in London has won a landmark legal case examining the lawfulness of AI models using vast quantities of protected data without permission.
Stability AI, whose directors include Oscar-winning filmmaker James Cameron, successfully defended against claims from the international photo agency Getty Images that it had infringed the agency's copyright.
Legal experts view the ruling as a setback for copyright owners' exclusive right to benefit from their creative work, with one prominent lawyer warning that it demonstrates "the UK's secondary IP system is not adequately robust to safeguard its artists."
Court documents showed that the agency's images were indeed used to develop the company's system, which enables users to generate images from text prompts. However, Stability was also found to have infringed the agency's trademarks in some cases.
The presiding judge, Mrs Justice Joanna Smith, remarked that deciding where to strike the balance between the interests of the creative industries and the AI industry was a matter "of significant societal importance."
The photo agency had initially sued the AI company for infringement of its intellectual property, alleging the AI firm was "completely unconcerned about what it fed into the training data" and had collected and copied millions of its photographs.
However, the agency had to drop its initial copyright case because there was insufficient evidence that the training had taken place within the United Kingdom. Instead, it pursued a claim that Stability was still using copies of its visual assets within its systems, assets it described as the "lifeblood" of its business.
Highlighting the complexity of AI copyright cases, the agency's core argument was that Stability's image-generation system, known as Stable Diffusion, amounted to an infringing copy because its creation would have constituted copyright infringement had it been carried out in the UK.
Mrs Justice Smith ruled: "A machine learning system such as Stable Diffusion which fails to retain or replicate any copyright material (and has never done so) is not a 'violating reproduction'." She declined to rule on the passing-off allegation and found in favour of some of Getty's claims of trademark infringement involving watermarks.
In a statement, Getty Images said: "We remain deeply concerned that even well-resourced organisations such as Getty Images face significant difficulties in safeguarding their creative works given the absence of disclosure requirements. Our company invested millions to reach this stage against a single company, which we must now continue to pursue in a different venue."
"We urge governments, including the UK, to establish more robust disclosure rules, which are essential to prevent costly legal battles and to enable creators to defend their rights."
Christian Dowell, of Stability AI, said: "We are satisfied with the court's ruling on the remaining allegations in this case. The agency's decision to voluntarily dismiss the majority of its copyright claims at the close of trial left only a limited number of allegations before the court, and this final ruling ultimately resolves the IP issues at the heart of the matter. We are grateful for the time and consideration the court has devoted to settling the significant questions in this case."
The judgment comes amid an ongoing debate over how the government should legislate on intellectual property and AI, with artists and writers, including several prominent figures, advocating for stronger protection. Meanwhile, technology companies are calling for broad access to protected content so they can build the most advanced and effective AI creation platforms.
The government is currently consulting on copyright and artificial intelligence and has stated: "Uncertainty over how our intellectual property framework functions is holding back growth for our AI and artistic industries. That must not persist."
Industry experts monitoring the situation suggest that regulators are considering whether to introduce a "text and data mining exception" into UK copyright law, which would allow protected works to be used to train machine learning systems in the United Kingdom unless the owner opts their content out of such training.