Artificial Intelligence in the Judicial Systems of Ukraine
by Ivan Horodyskyy
Summary
Artificial intelligence (AI) technologies are being applied today without a settled legal and ethical basis, a problem that is particularly acute in the administration of justice. In this field, AI may open up new opportunities, but it may also provoke a crisis: decisions may be discriminatory, they may not be trusted, and the very notion of justice may be called into question.
Amid the current wave of “digitalization”, the use of AI in the administration of justice is being actively discussed in Ukraine, and the first implementation steps have already been taken. The process, however, remains largely closed: it involves neither the public nor independent experts, and it lacks a proper assessment of the possible risks. This may result in mistakes in how AI is applied in this field, in flawed judicial decisions and subsequent appeals against them, and in an even deeper crisis of trust in the judiciary.
This makes it necessary to analyze how the Ukrainian authorities currently envisage the use of AI in the judiciary, to assess the steps already taken, and to formulate recommendations that state bodies, primarily the Ministry of Justice, should take into account when implementing state policy in this area.
Such an analysis reveals shortcomings in the state digitalization policy, an insufficient understanding of the risks of applying AI in the administration of justice, and the possibility that the idea will be used for political purposes. The range of participants in the discussion and implementation of strategies for applying AI in the judiciary therefore needs to be broadened; work in this direction should rest on substantive plans and concrete implementation measures; rapid roll-out should be rejected in favour of a stage-by-stage approach underpinned by the necessary legal framework; and the foreign experience of implementing such systems should be taken into account.
Background to the problem
Since the change of political leadership in 2019, the digitalization of the state and of public services has been one of the key directions of government policy, and the application of AI technologies is seen as one of its tools. The process, however, is not transparent enough and does not provide for an objective assessment of the results of applying AI, a point cybersecurity experts have made repeatedly. Volodymyr Styran, in particular, notes: “The main (well-grounded) complaint of cybersecurity experts against the Ministry of Digital Transformation is the lack of transparency of the decisions taken. No adequate answer is given to the question of where the security practices are.”
There is therefore a degree of skepticism about the cybersecurity of Ukraine’s state digital infrastructure, and recent hacker attacks and personal data leaks bear this out. The authorities’ failure to communicate not only the results but also the details of the development, implementation, and application of such digital tools further deepens distrust in the quality of these services.
The field of justice may well become one of the key areas for implementing AI technologies. The reason is society’s crisis of trust in the executive and the judiciary, with trust now standing at only around 10%. Digital technologies, including AI, are regarded as an effective way of addressing these problems and improving the performance of the judiciary: according to a UNESCO survey, 85% of lawyer respondents from more than 100 countries expressed interest in studying the opportunities opened up by applying AI in law, including in the judiciary.
So far, Ukraine has taken only the first steps towards introducing AI technologies in the administration of justice and in criminal justice. On September 15, 2020, the Ministry of Justice introduced the “assessment of the risks of repeat criminal offences via the CASSANDRA subsystem” for the bodies subordinated to it. On February 10, 2021, the High Council of Justice proposed launching a pilot project using artificial intelligence in one of the trial courts. On December 21, 2021, the Minister of Justice Denys Maliuska identified as a direction of penitentiary reform the “expansion of the opportunities of penitentiary system digitalization to ensure convenient collection, use, and analysis of penitentiary system data for well-grounded management decisions”. These initiatives may, however, grow in scale, not least because an election period is approaching.
Although the application of AI in the administration of justice has no large-scale effect in Ukraine yet, in the future it may affect the content and nature of court decisions, the course of judicial reform, and the independence of the judiciary as a whole. Because these technologies may be positioned as a key tool for solving the problems of the Ukrainian judiciary, such initiatives may be used in the interests of the authorities to solve political tasks. In China, for example, the application of AI in justice has drawn criticism from the professional community, which regards it as a means of tightening control over the administration of justice.
Ukraine currently has no policies setting out a strategy for introducing AI in justice that would provide for the prevention of the risks that may arise. The documents adopted so far, considered below, are mostly framework or declarative in nature. There are no roadmaps or action plans for the stage-by-stage implementation of state policy in this field, and the relevant projects have not been approved as the outcome of public and multi-stakeholder discussion.
The key document on artificial intelligence development in Ukraine is the Concept of Artificial Intelligence Development in Ukraine, approved by the Cabinet of Ministers of Ukraine on December 2, 2020. It names justice as a priority field for implementing AI technologies and sets the following tasks:
- further development of the technologies already available in the field of justice (the Unified Judiciary Information and Telecommunication System, e-court, the Unified Register of Pre-Trial Investigations, etc.);
- implementation of AI-based advisory programs that will open access to legal consultations to broad sections of society;
- prevention of socially dangerous phenomena through the analysis of available data using AI;
- determination of the re-socialization measures convicts need, based on the analysis of available data using AI technologies;
- adoption of court decisions in minor cases (upon mutual consent of the parties) on the basis of an AI-based analysis of compliance with legislation and of court practice.
These objectives are general and, at this stage, presuppose only a limited introduction of AI technologies into the judiciary. The Action Plan for the Implementation of the Concept of Artificial Intelligence Development in Ukraine for 2021-2024 contains no steps specifically concerning AI implementation in the field of justice; the document as a whole is largely a framework one.
Current state policy problems
However, with the digitalization of the public sphere, the approaching elections, and a possible deepening of the crisis of trust in the judiciary, this issue may gain relevance and the measures aimed at its implementation may intensify. The digitalization of justice may quickly rise up the agenda for political reasons, as is already happening with the idea of “e-elections”, which is being actively promoted as the elections draw near.
The key problem is that state digitalization policy treats speed as the main criterion for developing and implementing technological solutions. Risk management, as far as one can judge from public sources, is a secondary concern for the authorities, a point cybersecurity experts have also stressed, as noted above.
The current approach to implementing AI in the judiciary thus has the advantage that the relevant solutions can be developed and deployed quickly. The lack of a comprehensive and clear approach, however, may pose considerable social risks: loss of judicial independence, tighter control over the judiciary by other branches of power, a deepening crisis of trust in the courts, and so on.
The Ministry of Digital Transformation’s communication on cybersecurity is highly illustrative in this respect. In a 2019 interview with lb.ua, the Minister of Digital Transformation Mykhaylo Fedorov said that “the role of cybersecurity is a bit exaggerated. It is talked about a lot, but, in fact, few people can name any real cyber threat cases”. He illustrated this with the example of the Presidential Office, whose representatives in mid-2019 showed “dashboards with a thousand attacks a day, site overloads, etc.”, while after their dismissal seemingly “nothing happened for several months while we were building a new team”[10]. And the current public communication about possible data leaks from “Diia” is reduced to repeated statements that “Diia” itself neither stores nor processes information.
A similar situation, with no systematic assessment of risks and of ways to prevent them, may arise when AI technologies are implemented in the field of justice. The experience of implementing such technologies in the justice systems of the USA and Great Britain points to a number of challenges:
- the problem of discrimination in decisions made with AI technologies: a significant share of decisions is likely to be biased against specific categories of people, discriminating on social, ethnic, and other grounds (see the sketch after this list);
- an unsatisfactory level of technology security and protection against external interference: an independent audit and assessment of the state’s technical and organizational capacity to ensure an adequate level of cybersecurity when applying these technologies is needed;
- the quality of the decisions on which such technologies will “learn”: the algorithms will be developed and “trained” on existing judicial practice, the very practice that underlies the distrust in the courts mentioned above;
- the risk that decisions made with these technologies will later be appealed: without adequate legal regulation of their use, in particular in procedural legislation, and without establishing that such use complies with the Constitution and international human rights standards, there is a high likelihood that such decisions will be overturned by courts of other instances.
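To make the first and third of these points more concrete, the following minimal sketch (in Python, using scikit-learn) shows how a risk-scoring model trained on historically biased labels reproduces that bias: two otherwise identical cases receive different risk scores depending only on a group attribute. All feature names, numbers, and the model itself are hypothetical illustrations under assumed data, not a description of CASSANDRA or of any system actually used in Ukraine.

```python
# Illustrative only: a toy recidivism-style risk model trained on historically
# biased labels. Every name and number here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: a group attribute the model should arguably ignore,
# and a genuinely relevant feature (number of prior offences).
group = rng.integers(0, 2, n)          # 0 or 1
prior_offences = rng.poisson(1.5, n)

# Historical outcomes encode past practice: members of group 1 were labelled
# "high risk" more often even at the same number of prior offences.
p_label = 1 / (1 + np.exp(-(0.6 * prior_offences - 1.5 + 0.8 * group)))
label = rng.binomial(1, p_label)

X = np.column_stack([group, prior_offences])
model = LogisticRegression().fit(X, label)

# The trained model reproduces the historical disparity: identical cases get
# different risk scores depending only on the group attribute.
same_case = np.array([[0, 2], [1, 2]])
print(model.predict_proba(same_case)[:, 1])  # higher score for group 1
```

The point of the sketch is not the particular model but the mechanism: if the training data reflect biased past practice, an audit of inputs and outcomes, not faster deployment, is what reveals and limits the disparity.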
Foreign experience of applying AI in the field of justice, including the analysis and criticism of these technologies in the USA, is already known in Ukraine; the Minister of Justice Denys Maliuska, in particular, has referred to it. Yet the risks and lessons that could be drawn from that experience are not raised in the Ukrainian discussion of implementing AI in justice.
Implementing AI technologies in the Ukrainian judiciary may create a number of additional problems. Most importantly, it may lead to the abandonment of the course towards a reset of the judiciary, including the current reform of the High Council of Justice and the High Qualification Commission of Judges, in favour of a scenario that merely seems more radical and effective. As a result, profound changes in the judiciary would not take place; instead, an ostensibly high-profile decision would be implemented that disguises the preservation of the status quo in the interests of particular forces. The adoption of the law on presidential impeachment in 2019, which despite loud statements did not simplify the procedure at all, exemplifies this approach.
In addition, such implementation may require substantial resources, and a failure, quite likely given the imperfection of these technologies and of the approach taken, would only deepen the crisis of trust in the judiciary. A failure to restore trust in the judiciary would in turn have consequences in the economic, political, and humanitarian spheres, aggravating existing problems with the protection of property rights, corruption and the inevitability of punishment, and adherence to the principles of justice and the rule of law more broadly.
Tentative solutions
In this situation, it seems expedient to rule out the introduction of AI into Ukraine’s judiciary and criminal justice system as a short-term task and to treat it instead as a medium- and long-term prospect. This would allow the development and implementation of the relevant technologies to be approached more thoroughly, the potential risks to be prevented, and the tension that AI introduction could generate during the 2022-2024 electoral period to be eased.
This would ensure not merely that AI technologies are developed and introduced in the judiciary, but that they are applied in line with legal and ethical standards. Strategically, the field of justice needs better, clearer, and safer technologies more than it needs quick solutions that may only highlight and aggravate existing problems.
Since the field of justice is of critical importance to society, this would also make it possible to involve different groups of stakeholders in developing and implementing AI technologies: representatives of the judiciary and other branches of power, academics, representatives of the non-governmental sector, and others. That would help maintain a balance of interests in the subsequent application of these technologies in the judiciary and criminal justice, as well as trust in them.
Such a long-term approach is unlikely to be adopted now, since the implementation of AI technologies in the judiciary is not at the centre of current political discussion. However, if the Government steps up work on implementing these technologies in the near future, this may provoke political opposition and postpone their development and implementation for a longer period.
The Government, represented by the Ministry of Digital Transformation and the Ministry of Justice, has the resources needed to take more considered and more thorough steps towards developing and implementing AI technologies in the judiciary. At the same time, its tactics in this direction depend on the vision of the country’s political leadership and on the growing public attention to the problems of applying AI in the field of justice.
Conclusions and recommendations
1. State policy on developing and introducing AI technologies in the judiciary must be implemented with the involvement of a wide range of stakeholders: not only IT specialists, officials, judges, and representatives of other legal professions, but also social scientists, philosophers, and specialists in ethics and religion. This approach is consistent with the international standards already available in the field, in particular the European Ethical Charter on the use of artificial intelligence in judicial systems. It would help prevent a number of mistakes and risks and raise the trust in, and legitimacy of, the new technologies in society.
2. A “turbo mode” of implementing AI technologies in the field of justice should be rejected, since questions of justice and its administration are particularly sensitive for Ukrainian society. Speed cannot serve as the key indicator of the effectiveness of state policy here, and the process itself should be accompanied by transparent and ongoing communication with the public.
3. Detailed roadmaps and action plans for developing and implementing AI technologies in the judiciary should be drawn up, since strategic framework documents and by-laws are not enough. These documents should provide for stages of approving the technologies on a temporary basis, followed by analysis of the results and improvement of the technologies, rather than one-off implementation, and should include cost analysis and project performance indicators.
Additional research and analytical papers on the subject, prepared by non-governmental organizations, are also needed. They would make it possible to assess society’s attitude to the implementation of AI technologies in the field of justice and its expectations, and to determine the priority directions of work towards this goal.
4. Foreign experience of developing and implementing these technologies should be taken into account not merely by noting that it exists, but with a view to preventing the problems other countries have faced (in particular, the USA and Great Britain). This approach would make it possible to develop model solutions that could serve as an example to follow and be applied in other fields and other states.