
Artificial intelligence meets legal services

04/04/2023

Vitalii Meliankov

Senior Associate

Data Privacy,
IT,
Labour and Employment

The trial prompts for Midjourney and ChatGPT have long been used up, and the fatigue from omnipresent AI-generated content has hopefully faded. It is high time to catch your breath and think calmly about how to live side by side with impressively competent artificial intelligence.

This publication will (1) assess the impact of artificial intelligence on legal tech and (2) outline how conservatives can compete with tech-savvy colleagues. 

I. Artificial Intelligence and legal services

ChatGPT is impressive but is not a replacement for lawyers. Ironically, technical limitations have nothing to do with that: legal issues are a far greater obstacle.

The table below is an attempt to show that the limitations of the technology are not a compelling reason to dismiss artificial intelligence.

Soulless Machine (Makes mistakes and is just a no-no) | Human Lawyer (Flawless, impeccable, perfect)
No emotion and empathy. The machine is devoid of emotions and, therefore, will not be able to handle a request humanely and with understanding. | Call your lawyer on Sunday night and ask for your pet's name. The answer will be instantaneous.
Errors. Multiple tests prove that artificial intelligence still makes mistakes. | A human loses the physical ability to make mistakes upon getting a degree in law.
Limited knowledge. Artificial intelligence only uses previously processed information. | A lawyer's brain is an ephemeral autonomous wisdom forge limited only by the rule of law.


Underestimating artificial intelligence because of its technical flaws is like telling a child that they will never become a doctor because they are only nine years old. Moreover, the arguments listed above work poorly against AI even without the mocking comparison with human capability: AI already does a decent job of interpreting human emotions. A movie reference comes just in time:

 

Prompt "Rewrite as a lawyer speech on defendant" followed by a diary entry from Taxi Driver

Consequently, there is no reason to believe that emotions cannot be simulated by a machine. Nevertheless, while being a good start, simply bragging about good movies does not make one a good lawyer.

II. Artificial Intelligence and reputation & liability

Developers of demo and research products (ChatGPT, Midjourney) are rarely liable for the use of generated content. There are no indemnities or warranties. Nor are there any reputational risks, as the product is not designed to replace an expert. Naturally, this suggests that one should not expect professional help from demo chatbots right away. 

For example, the OpenAI Usage Policies disallow the use of the covered technologies (GPT, DALL-E) for offering tailored legal advice without a lawyer reviewing the information. On top of this, the OpenAI Terms of Use and Service Terms repeatedly emphasize that no compensation for damages should be expected from the technology provider.

Such statements are unlikely to inspire confidence in an AI-generated lawsuit.

In the meantime, a lawyer risks their own reputation or even finances. Even if AI capability is superior, a bird in the hand (brought by your human lawyer) is worth two in the bush (promised by your shiny AI counsel).

III. Artificial Intelligence and legal tech

Drag-and-drop used to be the pinnacle of no-code. Yet with the help of AI, it is enough to give the machine short text prompts. This is thanks to neural networks, which can be used as follows:

  1. Assistance in product development. If you are eager to develop your own application but have no idea how, a dialogue with ChatGPT can be a starting point. I can confirm that ChatGPT offers helpful advice on generating newsletters with Python, Midjourney helps users design a worthy logo, and GitHub Copilot helps make sure that everything is ok with the code (a sketch of the kind of scaffold ChatGPT might suggest follows after this list). 
  2. Integration into the product. The GPT language model can be integrated into a product for users to enjoy, and other language models, such as Google's LaMDA, are underway. For example, one can design an AI-backed application that generates complaints to local authorities about environmental or other relevant issues based on brief user input (see the integration sketch below). 
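
To make the first item more tangible, below is a minimal sketch of the kind of Python scaffold a ChatGPT dialogue might produce for sending a simple newsletter. It uses only the standard library; the SMTP host and the addresses are hypothetical placeholders, and a real setup would also need authentication, templating, and error handling.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical placeholders - replace with your own mail server and addresses.
SMTP_HOST = "smtp.example.com"
SENDER = "newsletter@example.com"
RECIPIENTS = ["client@example.com"]


def build_newsletter(subject: str, body: str) -> EmailMessage:
    """Assemble a plain-text newsletter message."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = SENDER
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(body)
    return msg


def send_newsletter(msg: EmailMessage) -> None:
    """Send the message over a plain SMTP connection."""
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)


if __name__ == "__main__":
    send_newsletter(build_newsletter("Legal tech digest", "This month in legal tech: ..."))
```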
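
For the second item, here is a minimal integration sketch assuming the openai Python package and its ChatCompletion interface as it existed at the time of writing; the model choice, prompt wording, and helper name are illustrative assumptions rather than a recommended setup.

```python
import os

import openai  # pip install openai (the legacy 0.27.x ChatCompletion interface is shown)

openai.api_key = os.environ["OPENAI_API_KEY"]


def draft_complaint(user_input: str) -> str:
    """Turn a brief description of a local problem into a draft complaint to the authorities."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft polite, formal complaints to local authorities. "
                    "Mark the draft as AI-generated and advise a lawyer's review."
                ),
            },
            {"role": "user", "content": user_input},
        ],
        temperature=0.2,  # keep the drafting conservative and repeatable
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(draft_complaint("Construction waste is being dumped next to the city park every night."))
```

Note that the system prompt already bakes in the AI-generation notice mentioned in the disclaimer below.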

Just a small disclaimer before you drop this article to proceed with your unicorn project: whatever the use of artificial intelligence, the terms of service must be followed (e.g., regarding intellectual property or notices that certain content has been generated by AI). 

Despite easier content creation, the risk of market oversaturation is low. Technologies facilitate development but rarely push users to be creative and daring. Though convenient, MS Excel/Google Sheets formulas are not used by everyone to do calculations. Another example is Richard Stallman's fight for free software and its actual level of support: the ability to modify the source code of a purchased product is intriguing, but not that necessary for most users.

Summing Up 

To be human. Applications focused on specialized services (medicine, law, finance) are unlikely to become fully autonomous in the near future. This is solely due to reputation and liability issues. In view of this, it is worth emphasizing the diminishing list of advantages of a human over a machine:

  • innovators who are ready to build their own apps should emphasize their responsibility even if a service is provided by AI. Pre-checking of results, compensation for damages, and proficient support may come in handy;
  • conservatives should consider the pros and cons of AI, just as with a human competitor. An increased focus should be placed on reputation. Task management must be rethought to provide a conscious balance of quality and speed that can compete with machine-generated services. There is no need to abandon your clients. Another analogy is in order: a car makes you travel faster, yet walking still has its convenient use cases.

The mentioned “near future” will come when artificial intelligence acquires legal capacity or when the world is ready to face an AI-controlled company. 

The ocean is still blue. Big tech companies are more interested in creating models to be used by industry-focused third-party apps. With such a business model, the profits are decent, the risks are low, and the popularity of the technology grows with each dependent startup. Although artificial intelligence is yet another tool to facilitate creation, one should not expect an influx of new legal tech products.

Author: Vitalii Meliankov
 

Prompt "Rewrite as a lawyer speech on defendant" followed by a diary entry from Taxi Driver
