Dave Fanning’s AI defamation case is at a new frontier of litigation

Posted on: 26 January 2024

The Achilles’ heel of generative AI is its pervasive tendency to spoof. This is giving rise to mind-bending legal issues, along with complaints from authors and artists, writes Deirdre Ahern, School of Law, in a piece originally published in The Irish Times.


A recent defamation claim initiated by broadcaster Dave Fanning is a sign we have reached what his lawyer called “the new frontier of libel law” in the age of Artificial Intelligence (AI).

An online news item mistakenly attached Fanning’s image to a story about a different, unnamed broadcaster who was on trial for sexual offences. This had no connection to Fanning.

His legal team suggested that an AI tool used as an automated news content aggregator may have malfunctioned and attached his image to the story in error.

This case, a first in Ireland, joins other pioneering cases cropping up around the globe concerning generative AI. Mind-bending legal issues surrounding defamation and copyright are emerging. Generative AI tools (such as OpenAI’s ChatGPT and Google’s Bard), which supply text and image content, notoriously suffer from glitches that lead to misstatements known as “hallucinations”. These have spawned a series of defamation claims for associated reputational damage. Since ChatGPT only launched in late 2022, this is all very new legal territory.

Generative AI models are trained on large data sets and respond to text prompts with well-crafted text or images on a given subject. However, the Achilles’ heel of generative AI is its pervasive tendency to “hallucinate”, or plausibly spoof, offering up entirely fabricated information and images while generating authoritative-sounding content.

When hallucination occurs, the misinformation may fall anywhere on a continuum from a minor, humorous gaffe to a reputation-crushing lie. Emerging cases make for alarming reading. In the United States, ChatGPT, quoting a fake newspaper article, erroneously stated that a law professor had sexually harassed a student. The AI chatbot claimed this had taken place while he was a faculty member of a university at which he had never taught, during a class trip to Alaska that had never taken place. Concerned that his reputation had been smeared, the professor took to Twitter to set the record straight. In Australia, an elected mayor contemplated suing OpenAI for defamation after members of the public told him that ChatGPT was claiming he had spent time in prison for bribery. In fact, he was entirely innocent: a whistleblower who had uncovered international bribery associated with an Australian bank subsidiary. At a minimum, those affected want ChatGPT corrected to remove the false claims.

The first generative AI defamation lawsuit was taken against OpenAI in the US last June. It was initiated by a radio presenter after a journalist used ChatGPT while researching a federal case. ChatGPT incorrectly named the radio presenter as the chief financial officer of a foundation that was the subject of a real lawsuit, thereby falsely implicating him in embezzlement and misappropriation of funds. This vigorously contested case in the state of Georgia has recently been cleared for substantive hearing.

Tackling important liability issues, such as what publication and defamation mean in the context of a generative AI tool, the case constitutes a significant test for the emerging industry.

Generative AI is an amazing breakthrough technology. However, given cases like these, the bigger question is about responsible innovation. How much leeway should companies be afforded to make developing technology models available for unrestricted use when those models have technical limitations that can harm individuals’ reputations?

Last June Sam Altman, CEO of OpenAI, estimated that it would take a year and a half to two years for the company to get a handle on ChatGPT’s hallucination problem. This matters hugely, as generative AI is now being ramped up and integrated into internet search engines.

In part, the accuracy of generative AI outputs relies on large language models (LLMs) being trained on huge amounts of high-quality data. However, artists and authors argue that feeding their copyrighted art, photographic images and literature into generative AI models without their consent is unfair and constitutes an unrecompensed breach of copyright. This is understandably a heated issue. Well-known authors such as Margaret Atwood and Dan Brown have taken up the cause.

The counterargument is that training AI should qualify as one of the public interest exceptions to copyright. In an Irish and EU context, a credible argument can be advanced that generative AI companies can rely on a relatively new copyright exception. The EU directive on copyright in the digital single market makes an exception for text and data mining for commercial purposes. This exception reflects technological advances but, in a classic trade-off, is balanced by an opt-out which allows authors to prevent use of their content, presumably including in AI training. A lot rides on how these provisions apply in the specific context of how generative AI operates. Further clarification is awaited in the wording of the EU AI Act.

There is no quick or easy fix for AI-related frontier legal and policy issues. Disruptive innovation, as exemplified by generative AI, forces the law to grapple with contexts it never contemplated. Yet it can also be a catalyst for the law to grow and adapt.

Striking a balance between facilitating innovation and setting proportionate principles for responsible innovation will take time. In the interim, courts must work capably within the bounds of the existing law. For tech companies, being an early mover in a new market allows them to gain competitive advantage. However, those who shape the field must also be prepared to field the risk.

Deirdre Ahern is a professor at Trinity College School of Law, where she is director of the Technologies, Law and Society research group. This piece was originally published in The Irish Times on 25 January 2024.