Agentic AI and data sovereignty: considerations for Australia's public sector future

Teradata Australia Pty Ltd

By Simon Williams, Federal Government Lead, Teradata
Wednesday, 10 December, 2025



Australia’s public sector is at a pivotal moment. With productivity at its lowest in 60 years and digital transformation accelerating across federal and state agencies, artificial intelligence presents both extraordinary opportunities and complex challenges. The emergence of agentic AI — autonomous systems capable of making decisions and executing complex tasks — offers unprecedented potential to enhance service delivery, policy execution, and operational efficiency. But with these capabilities come serious questions about data sovereignty, national security, and data governance.

Using generative AI to build a new computing paradigm with agentic systems

Agentic AI is an emerging capability that leverages the power of generative AI. These systems don’t just respond — they act. They can autonomously trigger workflows, coordinate across datasets, and support frontline decision-making with little to no human intervention, as appropriate to the use case. Technologies can bridge large language models with enterprise data warehouses, enabling natural language access to complex datasets for policy analysts, emergency responders, and service staff alike.
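To make the bridging pattern concrete, the sketch below shows one common shape for it: a natural-language question is translated to SQL and run against the warehouse under a read-only guardrail. Everything here is illustrative — `generate_sql` is a hypothetical stand-in for a real language-model call, and an in-memory SQLite database stands in for the enterprise warehouse.

```python
import sqlite3

def generate_sql(question: str) -> str:
    """Hypothetical stand-in for an LLM call that translates a
    natural-language question into SQL over a known schema."""
    # A real system would prompt a language model with the schema;
    # here one translation is hard-coded for illustration.
    if "open cases" in question.lower():
        return ("SELECT region, COUNT(*) FROM cases "
                "WHERE status = 'open' GROUP BY region")
    raise ValueError("Question not understood")

def answer(conn: sqlite3.Connection, question: str):
    sql = generate_sql(question)
    # Guardrail: the agent gets read-only access to the warehouse.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise PermissionError("Only read-only queries are permitted")
    return conn.execute(sql).fetchall()

# Demo against an in-memory stand-in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (region TEXT, status TEXT)")
conn.executemany("INSERT INTO cases VALUES (?, ?)",
                 [("NSW", "open"), ("NSW", "closed"), ("VIC", "open")])

print(dict(answer(conn, "How many open cases per region?")))
# → {'NSW': 1, 'VIC': 1}
```

The read-only check is the point of the sketch: even a minimal agent should be constrained to the narrowest access its use case requires.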

The opportunity

Agentic AI can transform government operations by providing 24/7 service delivery, processing complex policy scenarios in seconds rather than weeks, and identifying patterns across siloed datasets that human analysts might miss. It can handle routine inquiries, freeing skilled public servants for higher-value strategic work, and scale service delivery during crisis periods without linear cost increases.

The risk

Unlike deterministic systems that yield consistent answers, agentic AI operates in a non-deterministic realm, producing multiple plausible outputs that may vary with each query. This variability introduces significant challenges such as:

  • How does government audit decisions made by autonomous agents?
  • Who is accountable when an AI system denies a citizen’s request or prioritises emergency resources incorrectly?

Agentic systems can amplify biases present in training data, potentially creating systemic discrimination at scale. They are also vulnerable to model drift, adversarial attacks, and ‘hallucinations’ that could have serious consequences when applied to citizen services or national security decisions.

This shift demands new governance frameworks with deeper attention to transparency, explainability, human-in-the-loop protocols, and robust oversight mechanisms that don't yet exist in most agencies.

Data sovereignty considerations

Recent research shows 92% of organisations are concerned about data sovereignty, with 100% confirming that sovereignty risks — including service disruption — have forced them to reconsider data location strategies. For Australian government agencies, this is not theoretical: it’s a national security imperative. In the age of cloud computing, there is still a need for well-considered strategic segmentation of systems in support of data sovereignty.

Hyperscalers offer compelling advantages for government agencies: unmatched global infrastructure, massive investment in security R&D, compliance certifications, elastic scalability during demand spikes, and access to cutting-edge AI capabilities that can be prohibitively expensive to build in-house. Sovereign cloud offerings can provide data residency guarantees, with infrastructure physically located within national borders.

The strategic approach

Leading agencies are adopting hybrid architectures that leverage Hyperscalers for appropriate workloads — for example, development and testing environments, public-facing services with non-sensitive data, disaster recovery capabilities, and AI model training on de-identified datasets — while maintaining on-premise or sovereign cloud infrastructure for systems governed by stringent security requirements.

This isn't cloud repatriation; it's deployment optimisation. Agencies can harness cloud innovation for AI experimentation and non-critical workloads while ensuring ‘crown jewel’ data, including sensitive AI models and embeddings, remains under direct Australian control with clear jurisdictional authority.

Governance, security, and the accountability gap

Agentic AI's effectiveness comes with unprecedented complexity. Outputs may vary, decision paths can be opaque, and the speed of autonomous action can outpace human oversight. This creates an ‘accountability gap’ — when AI systems make consequential decisions, who bears responsibility for errors or harm?

Model selection, data quality, and deployment architecture are critical. Government has developed guidance, but individual agencies must act to ensure systems are explainable, secure, and aligned with public expectations.

Critical governance challenges include:

  • The black box problem: Many advanced AI models function as ‘black boxes’, making decisions through pathways that even their creators struggle to explain. For government services, this is unacceptable. Citizens have the right to understand the decision-making process.
  • Bias at scale: Agentic AI can process millions of decisions daily. If underlying models contain biases related to, for example, socioeconomic status, location, Indigenous status, or language proficiency, discriminatory outcomes could be replicated at unprecedented scale before anyone notices.
  • Model reliability: Agentic systems may confidently provide incorrect information (‘hallucinations’) or make decisions based on outdated data. In government contexts such as healthcare eligibility, visa processing, or welfare payments, such errors could have devastating human consequences.
  • Security vulnerabilities: Autonomous agents with access to multiple government systems present attractive targets for sophisticated cyber-attacks. Adversaries could potentially manipulate agent behaviour, extract sensitive data, or disrupt critical services.

The path forward

Agencies must adopt robust governance frameworks that mandate human oversight for high-stakes decisions, implement rigorous bias testing protocols, establish clear audit trails for all autonomous actions, and create ‘circuit breakers’ that pause AI operations when anomalies are detected. Model selection, data quality controls, and deployment architecture in support of appropriate data sovereignty requirements are all critical considerations.

Practical applications: transformative potential with necessary guardrails

Disaster response illustrates both agentic AI's promise and the need for careful implementation. As an example, during bushfire season, AI agents could monitor weather patterns, traffic movements, emergency services capacity, and infrastructure status, integrating data from the Bureau of Meteorology, state emergency services, hospital networks, and transport authorities to generate real-time operational recommendations.

The upside

Coordinators could receive automated analysis in seconds rather than hours, identifying vulnerable populations requiring evacuation, optimising resource deployment to areas of greatest need, and triggering emergency payments to affected citizens automatically. This could save lives and reduce response times dramatically.

The necessary caution

What happens when AI recommendations conflict with local knowledge? When weather predictions prove inaccurate? Effective disaster response requires human judgment, contextual understanding, and relationship management that AI cannot replicate.

The most successful implementations will treat agentic AI as a decision support tool that augments human expertise rather than replacing it — providing rapid data synthesis and scenario modelling while preserving human authority over operational decisions, particularly those involving human safety or significant resource commitments.

Conclusion: Building a digitally sovereign, strategically hybrid future

Australia’s public sector should embrace agentic AI to enhance citizen outcomes, but it must do so with clear understanding of both the opportunities and risks. Government must balance innovation and data security, embracing intelligent decision-making that matches workloads to appropriate infrastructure, implementing robust governance frameworks, and maintaining sovereign control over truly sensitive systems.

Strategic imperatives for government agencies include:

  • Developing hybrid cloud architectures that leverage Hyperscaler capabilities for appropriate workloads while maintaining sovereign infrastructure for systems governed by stringent security requirements
  • Implementing comprehensive AI governance frameworks with mandatory human oversight for high-stakes decisions
  • Investing in workforce capabilities to manage, oversee, and interrogate AI systems effectively
  • Building robust testing and monitoring protocols to detect bias, drift, and failures before they impact citizens.
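The last of these imperatives — testing for bias before it reaches citizens — can start very simply. The sketch below checks one fairness signal: the gap in automated approval rates between cohorts, compared against a tolerance. The cohort labels, sample decisions, and threshold are illustrative assumptions; a production protocol would track many such metrics over time.

```python
def approval_rate(decisions: list[str]) -> float:
    """Share of decisions in a cohort that were approvals."""
    return sum(d == "approve" for d in decisions) / len(decisions)

def parity_gap(by_cohort: dict[str, list[str]]) -> float:
    """Largest difference in approval rate between any two cohorts."""
    rates = [approval_rate(ds) for ds in by_cohort.values()]
    return max(rates) - min(rates)

# Illustrative daily sample of agent decisions, grouped by cohort.
decisions = {
    "metro":    ["approve", "approve", "approve", "deny"],
    "regional": ["approve", "deny", "deny", "deny"],
}

GAP_TOLERANCE = 0.20  # assumed policy threshold, set by the agency

gap = parity_gap(decisions)
if gap > GAP_TOLERANCE:
    print(f"ALERT: approval-rate gap {gap:.0%} exceeds tolerance; "
          "route model for bias review before further deployment")
```

Run daily against real decision logs, a check like this is one concrete form of the monitoring protocol the bullet above calls for: it surfaces disparity before it compounds at scale.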

The choice isn’t between progress and protection — it's about intelligent implementation that harnesses transformative technologies while preserving the trust, accountability, and sovereign control that underpin effective governance.

By investing in hybrid sovereign AI infrastructure and comprehensive governance frameworks now, Australian government organisations can lead the world in secure, ethical, and effective public sector innovation that serves citizens without compromising their rights or security.

Top image credit: iStock.com/imaginima

All content Copyright © 2025 Westwick-Farrow Pty Ltd