Government must introduce redress for AI harms: expert


By Dylan Bushell-Embling
Monday, 22 January, 2024

Experts from Flinders University are urging the Australian Government to introduce mechanisms designed to both safeguard the public against risks posed by AI and pursue redress in the event of harm.

According to Flinders University Centre for Social Impact Project Lead Peter McDonald, low-income, socially disadvantaged, First Nations and other marginalised groups of Australians are at grave risk of serious or permanent harm from the rise of algorithm-driven AI systems.

While McDonald welcomed the government’s announced plans to legislate for high-risk AI applications such as driverless cars and surgery, he said much of how Australia’s legislation can safeguard all of its citizens has yet to be properly assessed and addressed.

“The meteoric rise of AI and automated decision-making has both positive and negative impacts on the lives of Australians, as highlighted in the aftermath of the infamous Robodebt case which wrecked lives,” he said. “Datasets used to inform AI can be flawed and lead to disproportionate impacts on vulnerable groups. Previous research has found that AI would embed unfair practices into the hiring, insurance and renting sectors, and even education.”

To help manage these risks, the government needs to both regulate AI and provide a method of redress, McDonald argued.

“The benefits of this new transformative technology need to be used for social good. Our legislation, particularly in discrimination law, needs firm amendments to safeguard against these risks. We also should consider redress mechanisms that are accessible, independent and written into future legislation, consistent with the direction of other jurisdictions — notably the EU,” he said.

The Centre for Social Impact and Uniting Communities are jointly engaged in a research project aimed at exploring the increasing impact of data, AI and automated decision-making on isolated and disadvantaged people.

The Data for Good project is due to report later this year. It has offered six recommendations for consideration aimed at safeguarding marginalised people and communities from the effects of AI.

