GenAI forces govts to rethink assurance and monitoring
While Gartner predicts that, globally, over 60% of government artificial intelligence (AI) and data analytics investments will drive real-time operational decisions and outcomes by 2024, Australian government organisations are only slowly coming to grips with AI technologies.
Investments in AI are increasing to improve service delivery, drive automation and efficiency, and deliver better citizen experiences, but the rapid rise of generative AI has created a new challenge and requires a new mindset.
A recent Gartner survey showed that 70% of governments have deployed, or plan to deploy, generative AI in the next three years. Even though there is a lot of interest from governments across Australia, adoption of generative AI will remain problematic.
The Australian public has high expectations of government — including ethical, unbiased behaviour, accountability, transparency and privacy. Since generative AI does not conform with traditional ideas of how technology adoption can be de-risked, new approaches will be needed. To have a real impact, new real-time data insights from generative AI must be operationalised into ways of working.
A good way to think about generative AI is as a well-read, articulate, but inexperienced intern who hasn’t signed a non-disclosure agreement. You wouldn’t give them immediate access to any sensitive information and you’d make sure a more experienced staff member reviews their output before it’s published or used externally.
Unfettered use of these platforms creates potential risks around disclosure, exposure of policy thinking, inadvertent bias and use of copyrighted content within government records or reports.
It’s why the Australian Government issued a discussion paper and a call for submissions earlier this year on how it can mitigate any potential risks of AI and support safe and responsible AI practices. The Digital Transformation Agency has also developed interim guidance on government use of generative AI platforms for staff within Commonwealth agencies.
Despite these challenges, the adoption of generative AI can open the door to innovation across government. Organisations need to explore the limits of the technology in a low-impact way, where the value delivered far exceeds the residual risk.
Used in the right way, the technology works well in drafting multiple forms of the same communications in different languages, or for different demographics. It can be used to summarise long or complex cases to support improved decision-making and provide guidance to customer service agents in responding to complex queries.
In human services, for example, generative AI can quickly generate effective case manager training scenarios that are based on real life and are rapidly updated as processes or policies change. In contact centres, generative AI could be used to draft personalised government responses to citizen questions, incorporating cultural considerations to adjust responses to ensure appropriate empathy is maintained.
Impact on society and government operations
While governments have been wrestling with what to do with generative AI internally, they have been more concerned about its overall impact on society. The Australian eSafety Commissioner, for example, published a Tech Trends Position Statement on Generative AI in August 2023, outlining the risks, harms and opportunities of generative AI, as well as government regulatory challenges and approaches.
It’s important for government organisations to not just consider their own use of generative AI, but also how the technology will be used by the communities they serve and the impact that could have on their operations.
Citizens might take it into their own hands to use generative AI to translate or simplify government communications, potentially resulting in misleading outcomes.
Realistic communications purporting to be from government, or directed to government, can be created at volume by generative AI applications, potentially overwhelming an administration's capacity to respond.
As an example, AI-generated deepfake videos and audio recordings of politicians were spreading on social media in an attempt to influence the outcome of the recent Slovakian elections.
Taking a new approach to adoption
Since generative AI is not engineered to give a specific result, traditional governance and assurance practices cannot be applied in the same way. Taking a structured approach towards understanding, using and implementing responsible AI practices is essential for government organisations leveraging the technology. Continuous vigilance is required as the need for oversight does not end when an AI service goes into production.
Every deployment of AI should be considered from a risk perspective to ensure that the appropriate level of oversight is implemented right from day one. If adequate oversight cannot be applied, the use case is not viable for government.
Given the importance of accountability in government, there should be clarity around who is responsible for deciding if the generative AI implementation is performing appropriately and what action should be taken when it falls outside the predefined boundaries of acceptable behaviour.
While considerable work has already been done on developing AI policy at federal and state levels, to date very little has been mandated for government departments and agencies to direct their oversight of the technology.
Ultimately, there are significant benefits to be gained from government use of generative AI — but before widespread adoption can occur, urgent government action is required to set mandatory requirements, policies and frameworks around its use and monitoring.