Can AI Ever Satisfy the Professional Duty of Independent Legal Judgement?
Sanjhay Vijayakumar
AI’s reach has grown steadily over time and it is now being incorporated into the legal profession. Numerous AI tools, such as Harvey AI, have been deployed to improve efficiency in legal research and client communication. This could eventually absorb work that has traditionally belonged to junior lawyers, with over 44% of legal tasks expected to be automated according to a Goldman Sachs report.¹ Nevertheless, there is growing awareness that artificial intelligence has the potential to transform how lawyers work and how their services are delivered to clients.
It is important to recognise that whilst AI can replicate certain aspects of human decision-making, it does so without accountability. It cannot act as a human lawyer does in exercising professional judgement, expertise and integrity.² We must therefore ask whether even an advanced AI can perform a legal judgement whilst adhering to ethical principles. This article explores that question.
Defining ‘independent legal judgement’ and how AI falls short of this requirement.
Let us start by defining ‘independent legal judgement’. This is the legal reasoning that lawyers apply to each client’s situation in conjunction with commercial and moral considerations. The nuance of the word ‘independence’ matters here: the judgement must be free from external social or commercial influence. We can divide ‘independence’ into three sub-groups: analytical, moral and institutional independence.
Analytical independence requires one to apply years of expertise and experience. Moral independence requires one to understand the ethical consequences and societal impact of a judgement. Lastly, institutional independence requires one to remain free from the influence of parties other than one’s own client.³ So although AI and humans may reach the same judgement, it is the path taken to that destination which matters: the reasoning and the weighing of ethical and social constraints that humans perform when coming to a decision, and that AI does not.
How does algorithmic bias challenge independent legal judgement?
The question to consider is whether lawyers should present AI output as their own ‘independent’ analysis when the underlying algorithm may perpetuate algorithmic bias. Algorithmic bias is unfair discrimination that favours certain types of cases because of the way the algorithm was trained and designed. Algorithmic bias could violate the condition in Solicitors Regulation Authority (SRA) rule 3.2 that a service must be provided to the client to the best of the solicitor’s ability.²
An example of algorithmic bias is recommending bail amounts that disproportionately affect marginalised communities. There is no guarantee that a human lawyer is free of such bias either, but it does raise a fascinating question as to who should bear responsibility for the judgement: the AI for making the decision, or the lawyer who used the AI? According to the SRA, the responsibility always lies with the lawyer rather than the AI company.²
Moreover, automation bias, the tendency to over-rely on an AI’s output, would not be a problem if a lawyer could properly scrutinise the quality of the AI’s judgement.⁵ Even if algorithmic bias remains in an AI’s judgement, it need not cause harm provided the lawyer recognises and corrects it.
How LLMs work, and how they lead to only ‘approximate’ outputs.
It is important to understand that AI can only approximate decisions.⁴ Humans resemble what is called general AI: systems able to use reasoning and cognitive intelligence to perform tasks while remaining aware of contextual information, such as the situation of a client. In contrast, a narrow AI, which is what would be used to produce a legal judgement, uses pattern prediction to perform a specific set of tasks without any reasoning.
A narrow AI of this kind relies on an LLM, or large language model. LLMs are trained on large volumes of legal documents to recognise patterns in language. They approximate outputs based on past examples in their training data, such as previous court cases. One could argue that reasoning patterns exist within this training data, but in the end the LLM is still performing pattern mimicry rather than conscious thinking.⁴ It cannot appreciate the nuances of the specific case at hand. This is what gives us the ‘approximate outputs’.
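A real LLM predicts text with a neural network over billions of parameters, but the underlying principle of continuing text from statistical patterns in training data can be sketched with a toy bigram model. The corpus and function names below are purely illustrative:

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus standing in for the legal documents
# on which an LLM would be trained.
corpus = (
    "the court granted the motion . "
    "the court denied the appeal . "
    "the court granted the appeal . "
    "the judge granted the motion ."
).split()

# Count which word follows which (a bigram model): the crudest
# possible form of the pattern prediction described above.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "granted" follows "court" twice, "denied" only once, so the model
# outputs "granted": a frequency judgement, not a legal one.
print(predict_next("court"))
```

The model never asks whether granting is correct on the facts; it merely reproduces what was most common in its training data, which is the sense in which its outputs are only ‘approximate’.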
The black-box problem within LLMs, and how this causes distrust in the profession.
AI can produce the same output that a human lawyer would reach after a manually intensive process. But should this justify it satisfying the professional duty involved in creating a judgement? We must remember that AI relies on the LLMs described above, and that these are known as black boxes: one cannot inspect the internal logic that produced a given output. This can breed fear and mistrust in society.⁶ In turn, it compromises what is really needed to write a judgement, namely the act of justifying and reasoning.
This is why regions like the EU classify AI used in judicial decision-making as high-risk.⁷ What if we instead choose explainable AI (XAI), which allows humans to understand its decisions by using a white box rather than a black one, as an attempt to increase transparency? Although XAI can instil confidence by showing its reasoning and by adhering to regulations such as the GDPR, it still lacks the inherently human trait of self-reflection. And one could argue that substituting AI for a lawyer can violate trust, integrity and confidentiality.²
Conclusion
As we reach the end, let us engage directly with the question of whether AI can ever satisfy the duty of independent legal judgement. While artificial intelligence has undoubtedly revolutionised the legal sector by improving efficiency and accessibility, its limitations in exercising independent legal judgement remain evident. AI’s inability to demonstrate analytical, moral and institutional independence means that it cannot fulfil the professional and ethical duties expected of a human lawyer. Issues such as algorithmic bias, automation bias, and the opacity of black-box models further challenge the trust and accountability that underpin legal practice. Even with the advancements of XAI, the lack of conscious reasoning and moral reflection prevents AI from truly replicating the nuanced decision-making required in law. Hence, while AI can assist in streamlining legal work, the ultimate responsibility, along with the duty to act with integrity, independence and sound judgement, should always remain with the human lawyer.
Certain regulations, like the American Bar Association’s Formal Opinion 498 in the United States, which allows lawyers to use AI only if they understand its capabilities and can still exercise independent judgement over its outputs,⁸ point towards using AI through the correct means. Accordingly, I would like to introduce the concept of augmented judgement: the “combination of trusted human counsel and structured speed of AI”.⁹ It proposes that AI should be a decision-support tool that enhances speed and research depth, which is far better than a machine that substitutes for the lawyer’s reasoning. In conclusion, I think AI will be used by lawyers when writing ‘independent legal judgements’, but only to aid rather than replace the lawyer.
References
1. Nigam S, Makhani AK, Miller R. Is AI finally going to take our jobs? Meeting client AI/technological demands while supporting junior lawyers’ development. International Bar Association; 2024 Nov 29.
2. Solicitors Regulation Authority. SRA Principles 2019 [Internet]. Birmingham (UK): SRA; 2019.
3. Coe Smith C, Vaughan S. Independence, Representation and Risk: An Empirical Exploration of the Management of Client Relationships by Large Law Firms [Internet]. Birmingham (UK): Solicitors Regulation Authority.
4. Felin T, Holweg M. Theory Is All You Need: AI, Human Cognition, and Causal Reasoning. Strategy Sci. 2024.
5. Kolkman D, Bex F, Narayan N, van der Put M. Justitia ex machina: The impact of an AI system on legal decision-making and discretionary authority. Big Data Soc. 2024.
6. Burrell J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016;3(1).
7. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Off J Eur Union. 2024 Jul 12;L 2024/1689:1–229.
8. American Bar Association. Formal Opinion 498: Virtual Practice [Internet]. Chicago: American Bar Association; 202
9. Rose M, Lyall A. Augmented Judgment, Accelerated Execution: AI’s Role in Crisis, Issues and Risk Management. FleishmanHillard.
10. International Bar Association. The Future is Now: Artificial Intelligence and the Legal Profession. International Bar Association; 2024 Sep 9.