When you think of Christian Louboutin, what comes to mind are the brand's iconic, red-soled high heels, a trademark that has turned heads on catwalks and red carpets around the globe. But recently, Louboutin walked into a courtroom in India, where its famous red sole became the subject of a high-stakes trademark battle. And, as reported by The Fashion Law, while the case brought some 'sharp' clarity to the world of fashion trademarks, it also raised eyebrows in another sector: artificial intelligence in legal practice.

In Christian Louboutin SAS v. M/S The Shoe Boutique, Justice Prathiba M. Singh of the High Court of Delhi issued an interim order restraining the defendant from selling shoes that too closely imitate Louboutin's signature red sole and spike patterns. The case also ventured into uncharted territory, however, when Louboutin's counsel introduced evidence generated by ChatGPT, the generative AI platform, to bolster their argument that the brand has "acquired distinctiveness."

The court's treatment of this AI-generated evidence is a wake-up call for legal professionals on the boundaries and limitations of emerging technologies like ChatGPT in the practice of law.

Louboutin's counsel submitted a ChatGPT response to the question of whether Louboutin is known for spiked men's shoes. ChatGPT – the free version, no less – responded, "Louboutin is known for their iconic red-soled shoes, including spiked styles for men and women." The legal team saw this as further proof of the brand's distinctiveness. The court, however, had a different perspective.

Justice Singh said that ChatGPT "cannot be the basis of adjudication of legal or factual issues in a court of law." The court cited concerns about the reliability, accuracy, and potential for bias in AI-generated data.

The message for lawyers is clear: While AI has made strides in various sectors, its role in the legal field is still nascent and fraught with challenges.

Use AI in a supporting role, not as your 'star witness': Tools like ChatGPT can assist with preliminary research or with getting up to speed on a topic. They should not, however, be the cornerstone of your legal argument.

Input equals output: While ChatGPT can provide general information, it can also produce "incorrect responses, fictional case laws, and imaginative data," as the court noted, which undermines its reliability as a source of evidence. Had Louboutin's lawyers known how to prompt ChatGPT effectively, its output might instead have pointed them towards admissible evidence, or helped them catalogue and package their research succinctly, and the exercise could have been an excellent illustration of competent use of legal tech.

Human verification is non-negotiable: Any data or insights derived from AI must be corroborated by traditional, reliable methods.

Know your tools: Before deploying any AI tools, understand their limitations. Incorrect use can not only undermine your case but also set a precedent that may hamper the wider adoption of such technologies in legal settings.

This cautionary tale from India is by no means an isolated incident; it echoes developments in the United States, where judges have begun requiring attorneys to disclose their use of AI in legal filings and research, emphasising the need for human verification.

Despite the dazzling promise of AI, the Louboutin case is a pointed reminder that technology, no matter how advanced, cannot replace human judgement and ethical considerations in the legal arena. Louboutin's red sole met AI's grey area, and the lesson for legal practitioners is clear: tread carefully, or risk finding yourself at the 'sharp end' of precedent.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.