UK urged to tackle algorithmic bias
The UK’s Centre for Data Ethics and Innovation (CDEI) has recommended that the government introduce a mandatory transparency obligation for algorithms as part of a detailed review aimed at tackling the risks of algorithmic bias.
The report recommends that the government introduce such obligations on all public sector use of algorithms that have an impact on significant decisions affecting individuals.
It also recommends that government organisations actively use data to identify and mitigate bias, including by developing an understanding of the capabilities and limitations of algorithmic tools.
Another key recommendation is that the government issue guidance clarifying how the Equality Act applies to algorithmic decision-making, including guidance on the collection of data to measure bias.
A large-scale survey of UK citizens found that the majority of respondents were aware of the use of algorithms to support decision-making, but that respondents were more concerned about whether the outcome of decision-making is fair than about whether algorithms are used to inform those judgements.
The survey also identified support for the use of data such as age, ethnicity and sex in tackling algorithmic bias in recruitment.
The review also concluded that an ecosystem of industry standards and professional services is needed to help organisations address algorithmic bias internationally, and suggested that the UK Government could take a leadership role in this area.
The centre has accordingly initiated a programme of work on AI assurance, designed to identify what is needed to develop a strong AI accountability ecosystem in the UK.
“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases,” Centre for Data Ethics and Innovation board member Adrian Weller said.
“Government, regulators and industry need to work together with interdisciplinary experts, stakeholders and the public to ensure that algorithms are used to promote fairness, not undermine it.”
The Information Commissioner’s Office has welcomed the findings of the report, noting that data protection law “requires fair and transparent uses of data in algorithms, gives people rights in relation to automated decision-making, and demands that the outcome from the use of algorithms does not result in unfair or discriminatory impacts”.
But a recent survey conducted for BCS, The Chartered Institute of IT, found that 53% of UK adults have no faith in any organisation to use algorithms when making decisions about them, so winning public trust could be a challenge.
Governments internationally are taking their own steps to tackle the ethical concerns around the use of algorithms by the public sector. In July, the New Zealand Government launched a world-first set of standards to guide the ethical use of algorithms by public sector agencies.