- The use of artificial intelligence in law is already established, but is not very tangible for many users.
- The introduction of GPT-3 by OpenAI has revolutionized the understanding of AI in the legal field.
- ChatGPT has received both critical and positive feedback based on its performance.
- The quality of legal tasks remains questionable, despite the AI's impressive linguistic abilities.
- The generated language is usually grammatically correct, but the content is not always factually accurate.
- Application examples and existing problems were presented at a Weblaw.ch event.
- A recording of the event is available online to provide further insights.
The use of artificial intelligence in law is not new, and there are several useful areas of application. In practice, however, relatively little of this has been visible so far; AI in legal work has not been very tangible for a broader audience of users.
This has changed fundamentally with the release of the text generator GPT-3 (Generative Pretrained Transformer 3) by the company OpenAI. The ChatGPT application in particular has attracted attention. Various short assessments already exist, ranging, as usual, from “overcritical” to “over-euphoric”. In the legal environment, the focus is naturally on how well the application can perform legal tasks. Even though its underlying data is already somewhat outdated (2021) and the model was not specifically trained on legal questions or legal data, the results are astonishing. In terms of content they are, of course, not always correct, but linguistically and grammatically the application is up to the mark. And herein lies a greater problem: the output is stylistically and linguistically clean and coherent in form, but not in substance.
The day before yesterday I took part in the kick-off event of Weblaw.ch, where I was able to show around 180 listeners some examples of use in lawyers' everyday work, as well as the associated problems. A recording is now available, which you can find below: