

Human oversight key to fair use of AI in legal aid

15 May 2024


Everyone in Ireland enjoys the right to legal representation, especially in criminal matters, write lawyers Leo Twiggs and James Egleston.

The general scheme of the Criminal Justice (Legal Aid) Bill 2023 aims to modernise criminal legal aid by transferring administrative responsibility from the Department of Justice to the Legal Aid Board.

It also introduces strengthened oversight and governance structures.

The bill introduces reforms, such as a simple and transparent application system for criminal legal aid, supported by a statement of financial circumstances.

Income assessment

Under the bill, the courts will continue to grant legal aid, but may impose a condition that its granting is subject to a further income assessment by the Legal Aid Board.

With legal-aid costs mounting, and a drive for cost-effective administration of justice, the efficiencies promised by artificial intelligence will be enticing to administrators seeking financial savings.

Artificial intelligence (AI) systems could deliver cost savings in legal-aid provision by automating document analysis and assisting with legal research.

However, these efficiencies come with risks, especially if AI is used to make decisions that affect individual rights without human oversight.

If the AI system supports, rather than replaces, human judgment, it may not be considered high-risk under the AI Act.

Nonetheless, any AI system significantly influencing legal outcomes or the provision of essential services requires scrutiny to ensure that it does not undermine justice.

High-risk classification

As the European Union (EU) advances its proposed Artificial Intelligence Act, the provision of legal aid using AI warrants thoughtful analysis.

Any such application hinges on AI's potential impact on individual rights, the efficiency of legal services, and the safeguards necessary to ensure fairness and accountability.

The AI Act classifies AI systems as high-risk based on their intended use and their potential impact on individual rights and safety.

Specifically, AI applications in the following sectors are scrutinised closely:

  • Essential services,
  • Law enforcement, and
  • The administration of justice.

Legal-aid provision, particularly where AI influences decision-making affecting individual rights or outcomes, would fall under the high-risk category.

This is particularly relevant if AI were used to streamline decision-making on whether an applicant qualified for legal aid.
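To make the "supports rather than replaces" distinction concrete, the sketch below shows one way a decision-support tool for a means assessment might be structured so that it only ever recommends an outcome and routes every case to a human decision-maker. It is a minimal illustration, not anything prescribed by the bill or the AI Act: the field names, thresholds, and per-dependant allowance are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical illustration only: neither the bill nor the AI Act prescribes
# this design. Field names and thresholds are invented for the example.

@dataclass
class MeansAssessment:
    applicant_id: str
    disposable_income: float   # weekly, EUR (hypothetical)
    dependants: int

INCOME_LIMIT = 500.00          # hypothetical weekly threshold
REVIEW_MARGIN = 50.00          # borderline band routed straight to a caseworker

def triage(assessment: MeansAssessment) -> str:
    """Suggest a route for a legal-aid means assessment.

    The function only *recommends*; under a human-oversight model the
    final grant or refusal always rests with a human decision-maker.
    """
    allowance = assessment.dependants * 25.00  # hypothetical per-dependant allowance
    adjusted = assessment.disposable_income - allowance

    if adjusted <= INCOME_LIMIT - REVIEW_MARGIN:
        return "recommend-grant (human confirmation required)"
    if adjusted >= INCOME_LIMIT + REVIEW_MARGIN:
        return "recommend-refuse (human confirmation required)"
    return "borderline - escalate to caseworker"

print(triage(MeansAssessment("A-123", disposable_income=480.0, dependants=2)))
```

The design choice that matters here is that no branch produces a final decision: even the clear-cut cases are labelled as recommendations awaiting confirmation, which is what keeps such a tool on the decision-support side of the line.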

Safeguards for high-risk AI systems

For AI systems deemed high-risk, the AI Act mandates several stringent safeguards to mitigate risks and protect public interests.

High-risk AI systems must have a risk-management system in place, including continuous identification and analysis of risks throughout the lifecycle.

High-risk systems must adhere to strict data-governance and management practices:

  • Ensuring the quality, accuracy, and integrity of data sets,
  • Implementing measures to mitigate biases in training data (one simple check is sketched below), and
  • Securing data storage and handling to protect against unauthorised access and data breaches.
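As a rough illustration of what a bias check on training data might involve, the sketch below compares recommendation rates across applicant groups in a historical dataset. The AI Act does not prescribe any particular fairness metric, and the group labels and data here are invented; this is simply one starting point a data-governance process might use to spot skewed training data.

```python
from collections import defaultdict

# Illustrative only: the AI Act does not mandate a specific fairness metric.
# The records below are hypothetical (group label, whether aid was recommended).
records = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, recommended in records:
    totals[group] += 1
    if recommended:
        positives[group] += 1

# Recommendation rate per group, and the gap between the best and worst served.
rates = {group: positives[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)
print(f"Rate gap between groups: {gap:.2f}")  # a large gap would prompt further review
```

A gap on its own proves nothing about the cause, but flagging it forces the kind of documented review that the data-governance obligations are designed to produce.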

Comprehensive documentation is required to ensure transparency and accountability, including detailed technical documentation describing design, development, and deployment processes.

Users and affected individuals must be provided with clear and understandable information about the AI system, including:

  • Instructions for use (purpose and intended functionality),
  • Information on limitations and potential risks, and
  • Disclosure that they are interacting with an AI system when applicable.

High-risk AI systems must undergo conformity assessments to verify compliance with the EU AI Act requirements, possibly even third-party conformity assessments.

Continuous post-implementation monitoring is essential to ensure the AI system remains compliant and safe, by establishing mechanisms for the ongoing collection and analysis of performance data.

Corrective actions

High-risk AI systems must have protocols for reporting incidents and breaches, including immediate notification to relevant authorities, and implementation of corrective actions to mitigate incidents.

Strong governance frameworks must be established to ensure accountability by complying with ethical guidelines and establishing oversight bodies to monitor AI-system deployment and use.

Most importantly in the legal-aid context, human-oversight mechanisms must be implemented to monitor and intervene in the operation of high-risk AI, with clear protocols for human intervention to prevent or mitigate harm.
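One way such a human-oversight mechanism could be structured is sketched below: no outcome takes effect until a named human reviewer records it, and any override of the system's suggestion is logged for audit. Again, this is a hypothetical design under assumed names, not a structure required by the AI Act.

```python
# A minimal sketch of a human-in-the-loop gate, assuming a decision-support
# system that only ever produces recommendations. Class and field names are
# hypothetical; the AI Act requires effective human oversight but does not
# prescribe this structure.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    applicant_id: str
    suggested_outcome: str      # e.g. "grant" or "refuse"
    rationale: str

@dataclass
class FinalDecision:
    applicant_id: str
    outcome: str
    decided_by: str             # always a named human decision-maker
    overrode_system: bool

def record_decision(rec: Recommendation, reviewer: str,
                    outcome: Optional[str] = None) -> FinalDecision:
    """No outcome takes effect until a human reviewer records it.

    The reviewer may accept the system's suggestion or substitute their own;
    any override is logged so that it can be audited later.
    """
    chosen = outcome or rec.suggested_outcome
    return FinalDecision(
        applicant_id=rec.applicant_id,
        outcome=chosen,
        decided_by=reviewer,
        overrode_system=(chosen != rec.suggested_outcome),
    )

rec = Recommendation("A-123", "refuse", "adjusted income above hypothetical limit")
decision = record_decision(rec, reviewer="caseworker-07", outcome="grant")
print(decision)
```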

Protecting individuals and society

These safeguards collectively aim to ensure that high-risk AI systems are developed and used in a manner that prioritises safety, transparency, and accountability, protecting individuals and society from potential harms associated with AI technologies.

These measures are essential to balance the efficiency gains from AI with the fundamental rights of legal-aid applicants.

Leo Twiggs and James Egleston
Leo Twiggs is a US-qualified attorney and policy advisor at the Law Society, with a focus on the digital divide and access to justice. James Egleston LLB, MA (NUI) is a former Law Reform Commission and Department of Justice legal researcher. He now works as a policy development executive in the Law Society.
