The UK’s Public Authority Algorithmic and Automated Decision-Making Systems Bill: key takeaways

October 16, 2024

By Marcus Evans and Salma Khatab

Lord Clement-Jones has introduced the Public Authority Algorithmic and Automated Decision-Making Systems Bill (Bill), a Private Members' Bill, into the House of Lords. Currently at the second reading stage, the Bill responds to public authorities' increasing reliance on AI and algorithmic systems in the UK, and aims to mitigate the potential risks associated with their use. It would establish significant new requirements for the development, deployment, and monitoring of such systems.

Although Private Members' Bills are rarely passed into law, they serve an important role in drawing attention to critical issues, encouraging debate, and potentially influencing future policy. The progress of a previous Private Members’ Bill dealing with AI (not limited to the public sector) ended when there was a change of government (see our blog, Artificial Intelligence (Regulation) Bill: UK Private Members’ Bill underscores wide-spread regulatory concerns).

What does the Bill apply to?

The Bill applies to any “algorithmic or automated decision-making system” (AADM), defined as “any technology that either assists or replaces the judgement of human decision-makers”, developed or procured by a public authority. This includes:

  • Any system, tool or statistical model used to inform, recommend or make an administrative decision about a service user or a group of service users.
  • Systems in development, excluding automated decision-making systems operating in test environments.

The Bill would not apply to AADMs:

  • Dealing with national security.
  • That merely calculate and implement formulas, including taxation and budgetary allocation, “insofar as they automate a process of calculation which would otherwise be carried out manually and fully understood.”

Objectives of the Bill

The Bill is designed to ensure that public authorities using AADMs adhere to stringent standards of transparency, fairness, and accountability. Its purpose is to:

  1. Mitigate risks to individuals: the Bill seeks to protect individuals from harm resulting from public sector AADMs, particularly focusing on the risks of bias, unfair outcomes, and lack of transparency.
  2. Establish an independent dispute resolution service: to ensure individuals have a clear avenue to challenge or seek redress for decisions made by public sector AADMs, the Bill proposes an independent dispute resolution mechanism.

Deploying a new AI system

If enacted, the Bill would impose several new obligations on public authorities before they could deploy an AADM. These requirements aim to ensure transparency and fairness in public sector decision-making.

1. Impact assessments and bias testing

Public authorities would be required to conduct and publish an impact assessment prior to deploying an AADM. The assessment would evaluate the fairness of the system and ensure it complies with existing anti-discrimination laws. A mandatory bias assessment would be included as part of this process, ensuring that any systemic biases present in the AI model are identified and mitigated before it is put into use.
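
The Bill leaves the form of bias testing to be specified later. Purely by way of illustration, a bias assessment might report a measure such as the gap in favourable outcomes between groups of service users; the metric, data and code in the sketch below are our assumptions, not the Bill's text.

```python
# Illustrative sketch only: the Bill prescribes no particular metric.
# Computes a demographic-parity gap, i.e. the difference in favourable-
# outcome rates between the best- and worst-treated groups.

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(f"Approval-rate gap: {demographic_parity_gap(sample):.2f}")  # 0.33
```

A real assessment would of course examine several metrics and the context of the decision, but even a check this simple can surface a disparity worth investigating before deployment.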

2. Algorithmic transparency records

Public authorities would also need to maintain and publish an algorithmic transparency record. This would provide detailed information on how the AADM operates, including the extent of human oversight in the automated decision-making process. Such transparency is intended to enable affected members of the public to better understand how decisions affecting them are made.
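
The Bill leaves the precise contents of a transparency record to be filled in later. As an illustration only (the field names below are our assumptions, not the Bill's), such a record might capture what the system does, who operates it, what data it draws on, and how humans oversee it.

```python
# Illustrative sketch only: field names are assumptions, not the Bill's text.
from dataclasses import dataclass

@dataclass
class TransparencyRecord:
    system_name: str
    operating_authority: str
    purpose: str             # the administrative decision the AADM informs
    human_oversight: str     # how, and at what stage, a human reviews outputs
    data_sources: list[str]  # inputs the system consults or was trained on

record = TransparencyRecord(
    system_name="Benefit Eligibility Screener",
    operating_authority="Example Council",
    purpose="Recommends eligibility outcomes for caseworker review",
    human_oversight="A caseworker confirms or overrides every recommendation",
    data_sources=["application form", "historical case outcomes"],
)
```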

3. Logs

The Bill would mandate that AADMs used by public authorities automatically generate logs of their decision-making processes. Such logs, which would need to be retained for at least five years, are intended to enable future scrutiny and accountability. Certain exceptions to the requirement may apply.
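
Again, the Bill does not prescribe a log format. The hypothetical sketch below shows the kind of automatically generated, timestamped entry that could support scrutiny of an individual decision years later, together with the five-year minimum retention period; all names and structure are illustrative assumptions.

```python
# Illustrative sketch only: the Bill specifies retention, not format.
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=5 * 365)  # the Bill's five-year minimum

def log_decision(system_id, inputs, output, human_reviewed):
    entry = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,                  # what the AADM saw
        "output": output,                  # what it decided or recommended
        "human_reviewed": human_reviewed,  # whether a person checked it
    }
    # Append-only storage keeps the audit trail tamper-evident.
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def eligible_for_deletion(entry):
    """True once an entry is past the minimum retention period."""
    made = datetime.fromisoformat(entry["timestamp"])
    return datetime.now(timezone.utc) - made >= RETENTION
```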

Ongoing responsibilities after deployment

The Bill places several ongoing obligations on public authorities with respect to the operation and oversight of AI systems:

  • Public register of AI decisions: public authorities would be required to keep a publicly accessible register of decisions made either wholly or partially by AADMs, so as to facilitate transparency and enable public scrutiny of automated decision-making.
  • Explaining outcomes: public authorities would need to be able to explain the reasoning behind decisions made by AADMs, particularly when they have significant impacts on individuals (a minimal illustrative sketch follows this list).
  • Employee training and oversight: public sector employees involved in overseeing AI systems would need to be trained to challenge automated decisions. This measure aims to ensure that humans retain ultimate authority and are equipped to intervene when necessary, mitigating so-called “automation bias” (uncritical reliance on AI outputs). Schedule 1 of the Bill sets out various factors that public authorities must address in providing such training, including the design, function, and risks of the system, so that their employees can review, explain and oversee the operation of an AADM.
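
To illustrate the “explaining outcomes” obligation, the hypothetical sketch below attaches human-readable reasons to each output of a simple eligibility recommender, so that a caseworker can explain (and, where necessary, challenge) the result. The criteria and thresholds are invented for illustration and do not come from the Bill.

```python
# Illustrative sketch only: criteria and thresholds are invented.
def assess_application(income, dependants):
    """Returns a recommendation plus the reasons that produced it."""
    reasons = []
    eligible = True
    if income > 25_000:
        eligible = False
        reasons.append("household income above the 25,000 threshold")
    if dependants == 0:
        eligible = False
        reasons.append("no dependants recorded on the application")
    if eligible:
        reasons.append("all eligibility criteria met")
    return {"eligible": eligible, "reasons": reasons}

result = assess_application(income=28_000, dependants=2)
print(result["eligible"], "-", "; ".join(result["reasons"]))
# False - household income above the 25,000 threshold
```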

AI systems unfit for scrutiny banned

The Bill would, significantly, prohibit the use of AADMs that cannot be adequately scrutinised. Public authorities would not be allowed to deploy opaque or “black box” AADMs whose internal decision-making processes cannot be properly understood, audited, or challenged. The provision is clearly aimed at preventing the use of technology that could lead to unaccountable or discriminatory decision-making.

Independent dispute resolution service

The Bill emphasises the need for accessible, independent dispute resolution. It would establish a service providing a formal process through which individuals could challenge or seek redress for decisions made by AADMs, offering a route for addressing grievances without resorting to the courts.

Our observations

The Bill reflects the growing use of AI in the public sector. By prioritising fairness, transparency, and accountability, it seeks to protect individuals from the potential harms of automated decision-making, while ensuring that AI is used responsibly by public authorities. If adopted, it would be the public sector equivalent of the EU AI Act in England and Wales: although most of the detail is left to the Secretary of State to fill in through subsequent statutory instruments, it proposes a thorough AI governance framework and would likely be very influential in how high-risk AI systems are governed across the UK.

The new government has hinted at regulation of AI (see our blog, AI regulation in the UK: Will the next government introduce AI legislation?), but its focus is not yet clear. It is reasonable to assume that foundation/large language models will be a significant focus; how much traction proposals to regulate the public sector’s use of AI specifically will gain remains to be seen. However, this proposal compactly illustrates what a thorough AI governance process entails.

Businesses proposing to develop AI systems for the public sector in particular will wish to track the progress of the Bill, but in the current UK AI legislative vacuum any traction it gains will be significant for the private sector too.