What should AI in government look like?


By Peter Waters and Ethan Huang*
Thursday, 10 June, 2021


As governments continue to release guidance encouraging organisations to use AI systems ethically, the question arises of how governments themselves are implementing those same practices.

Government decision-making often has a significant impact on individuals and organisations, so it is especially important that accountability, transparency and the other principles typically associated with AI ethics are upheld when governments deploy AI systems.

Locally, the Robodebt saga has shown how the risk of harm can eventuate when such principles are not followed.

In response to a review by the Committee on Standards in Public Life recommending clearer guidance on the ethical use of AI in the public sector, the UK Government has released its seven-point Ethics, Transparency and Accountability Framework for Automated Decision-Making.

The framework follows the UK Information Commissioner’s Office’s call in April this year for views on its AI and data protection risk mitigation and management toolkit (covered in our previous article ‘Here’s a great toolkit for Artificial Intelligence (AI) governance within your organisation’) and adds yet another practical piece to the AI ethics puzzle.

The framework

Although the framework is targeted at government departments, all organisations can learn from it: the guidance is general in nature and can readily be adapted to the context of private organisations. The key components of the framework are as follows:

Automated decision-making (ADM) is defined to include both solely automated decisions (made with no human involvement or judgement), which are often used for repetitive, routine decision-making, and automated assisted decision-making (where humans make the final decision), which usually concerns more complex decisions likely to have a significant impact on individuals.

Article 22 of the GDPR applies. It gives individuals the right not to be subject to solely automated decision-making that produces legal or similarly significant effects, unless the decision is necessary for entering into or performing a contract, is authorised by law, or is made with the data subject’s explicit consent.

Threshold questions — before deploying ADM, the framework recommends first asking:

  1. Is using ADM appropriate in the specific context?
  2. Have algorithmic risks been taken into account, and is the policy intent/outcome best achieved through ADM? Consider conducting a thorough risk assessment (see the sketch below).
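
The framework stops at posing these questions, but a team could operationalise them as a hard gate in its deployment process. The Python sketch below is a minimal, hypothetical illustration (the class, field and function names are our own, not part of the framework): it records the answers to both threshold questions and refuses to proceed unless each is answered affirmatively and a completed risk assessment is on file.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ThresholdAssessment:
        # Q1: is using ADM appropriate in this specific context?
        adm_appropriate_in_context: bool
        # Q2: have algorithmic risks been weighed, and is ADM the best way
        # to achieve the policy intent/outcome?
        risks_assessed_and_adm_best_option: bool
        # Reference to the completed risk assessment document, if any
        risk_assessment_reference: Optional[str] = None

    def may_proceed(a: ThresholdAssessment) -> bool:
        """Deployment gate: both questions answered 'yes' and a risk
        assessment recorded."""
        return (a.adm_appropriate_in_context
                and a.risks_assessed_and_adm_best_option
                and a.risk_assessment_reference is not None)

    assessment = ThresholdAssessment(
        adm_appropriate_in_context=True,
        risks_assessed_and_adm_best_option=True,
        risk_assessment_reference="RA-2021-042",  # hypothetical document ID
    )
    assert may_proceed(assessment), "Threshold questions not satisfied; do not deploy"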

Guidance on responsible use of data and third parties — the UK government points to its Data Ethics Framework, which sets out principles guiding the design and use of data in the public sector. It recommends assessing the quality, representativeness and limitations of datasets, any potential for bias, and the need for human oversight or intervention in using such data. It also suggests engaging early with third parties so that the framework is embedded into commercial arrangements.
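
The Data Ethics Framework does not prescribe how to test representativeness, but as a rough sketch of what such an assessment might look like in practice, the hypothetical Python example below compares each demographic group’s share of a dataset against an external population benchmark (say, census figures) and flags groups that are under-represented beyond a tolerance. The function name, groups and figures are illustrative assumptions, not taken from the framework.

    def representativeness_gaps(dataset_counts, population_shares, tolerance=0.05):
        """Return groups whose share of the dataset falls short of their share
        of the population by more than `tolerance` (absolute proportion)."""
        total = sum(dataset_counts.values())
        gaps = {}
        for group, pop_share in population_shares.items():
            data_share = dataset_counts.get(group, 0) / total
            if pop_share - data_share > tolerance:
                gaps[group] = (data_share, pop_share)
        return gaps

    counts = {"18-34": 5200, "35-54": 3100, "55+": 700}      # records per age band
    benchmark = {"18-34": 0.35, "35-54": 0.35, "55+": 0.30}  # population shares
    for group, (data_share, pop_share) in representativeness_gaps(counts, benchmark).items():
        print(f"{group}: {data_share:.0%} of data vs {pop_share:.0%} of population -- review for bias")

A gap flagged by a check like this would then feed into the human oversight and intervention decisions the guidance calls for.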

Seven-point framework

The framework also sets out practical steps and resources for each of the seven points below:

  1. Test to avoid any unintended outcomes or consequences:
    Prototype and test your algorithm or system so that it is fully understood, robust and sustainable, and that it delivers the intended policy outcomes (and unintended consequences are identified).
  2. Deliver fair services for all of our users and citizens:
    Involve a multidisciplinary and diverse team in the development of the algorithm or system to spot and counter prejudices, bias and discrimination.
  3. Be clear who is responsible:
    Work on the assumption that every significant automated decision should be agreed by a minister, and that all major processes and services being considered for automation should have a senior owner.
  4. Handle data safely and protect citizens’ interests:
    Ensure that the algorithm or system handles and protects data safely, and is fully compliant with data protection legislation.
  5. Help users and citizens understand how it impacts them:
    Work on the basis of a ‘presumption of publication’ for all algorithms that enable automated decision-making: notify citizens when a process or service involves automated decision-making, with plain English explanations (any exceptions to that rule to be agreed with government legal advisors before ministerial authorisation).
  6. Ensure that you are compliant with the law:
    Ensure that your algorithm or system adheres to the necessary legislation and has full legal sign-off from relevant government legal advisors.
  7. Build something that is futureproof:
    Continuously monitor the algorithm or system, institute formal review points (recommended at least quarterly) and enable end-user challenge, to ensure it delivers the intended outcomes and to mitigate unintended consequences that may develop over time (referring back to points 1 to 6 throughout).
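
Point 7 is the most operational of the seven. As a hypothetical sketch of what continuous monitoring could look like (the labels, threshold and baseline figure below are our own assumptions, not from the framework), a team might log each automated decision, compare the current period’s outcome rate against a baseline fixed at sign-off, and escalate any drift to the senior owner ahead of the quarterly review:

    def outcome_rate(decisions):
        """Proportion of 'approve' outcomes among logged decision labels."""
        return sum(d == "approve" for d in decisions) / len(decisions)

    def drift_alert(baseline_rate, current_decisions, threshold=0.10):
        """True if the approval rate has moved more than `threshold` from the
        baseline, suggesting the system may no longer deliver the intended
        outcomes or may have developed unintended consequences."""
        return abs(outcome_rate(current_decisions) - baseline_rate) > threshold

    baseline = 0.72  # approval rate measured during pre-deployment testing (point 1)
    this_quarter = ["approve"] * 540 + ["refer"] * 460  # decisions logged this quarter
    if drift_alert(baseline, this_quarter):
        print("Outcome drift detected -- escalate to the senior owner (point 3) for review")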

The framework was released in tandem with an ‘Ethics, Transparency and Accountability Risk Potential Assessment Form’, designed to help teams assess the possible risk of an automated or algorithmic decision.

*Peter Waters, Consultant and Ethan Huang, Graduate, Gilbert + Tobin.


