Explainability and Transparency Within Decision-Making Systems in the Legal Domain
Hajer Al Raisi
Increasingly powerful forms of artificial intelligence (AI) have the potential to enhance the quality and efficiency of legal services, specifically those provided by lawyers. While the shortcomings of AI are well documented, this research illustrates that delineating the spheres of machine-driven decision-making, together with deliberate design solutions, may help mitigate the perennial problems of explainability and transparency. The real challenge for policymakers and the legal profession is not whether AI is better than humans, but how lawyers and legal professionals can harness its potential.
Given the UK’s long-established rule of law, the challenges posed by AI technology must be managed with careful consideration.1 In decision-making within the legal domain, achieving explainable and transparent AI systems is an essential principle of justice. However, as AI systems become more advanced, it becomes harder to determine their inner workings and how they reach their conclusions. This essay therefore focuses on the rising challenges of explainability and transparency, and highlights research showing that current AI systems are unable to meaningfully explain their decision-making processes. Hence, although there is an expectation that AI systems will match or best human lawyers, the current state of these technologies allows the legal domain to use AI as part of the decision-making process, rather than as a complete replacement for human lawyers.