Comparing NZ and Australia on AI: adoption-first versus guardrails-first
By Brandon Voight, Fellow, Future Government Institute and Director of Public Sector Solutions at OpenText
Thursday, 15 January, 2026
Artificial intelligence is now a front‑of‑mind priority for both Wellington and Canberra, but their national approaches diverge in emphasis and execution. New Zealand has moved to a light‑touch, OECD‑aligned adoption strategy coupled with practical guidance for the public service and business; Australia is advancing risk‑based guardrails, strengthening public‑sector policy, and scaling capability through national centres and standards. Understanding these contrasts matters for CIOs, vendors and trans‑Tasman organisations planning AI deployments across both markets.
New Zealand: enable adoption, uphold trust
In July 2025, the New Zealand Government released its first National AI Strategy, positioning Aotearoa as an ‘adopter nation’ that will accelerate private‑sector uptake while maintaining a principled approach grounded in the OECD AI Principles. The strategy is paired with Responsible AI guidance for businesses to give firms confidence to implement AI safely.
Inside government, the Government Chief Digital Officer (GCDO) launched the Public Service AI Framework and updated Responsible AI Guidance for the Public Service: GenAI in early 2025 — laying out lifecycle governance, security, privacy, human‑in‑the‑loop expectations, accessibility and transparent customer experience. The guidance explicitly builds on Cabinet’s risk‑based regulatory posture (leverage existing mechanisms, regulate proportionately where needed), and situates public service practice within the national strategy.
Crucially, New Zealand’s AI governance is anchored by the Algorithm Charter for Aotearoa (2020), a cross‑agency commitment to transparency, bias management, privacy and Treaty of Waitangi considerations, with human oversight and channels for appeal. The Charter remains a distinctive element of NZ’s public‑sector approach.
In practice, agencies are encouraged to publish plain‑English documentation of algorithmic decision‑support, consult affected communities and embed Te Ao Māori perspectives in design, while the GCDO surveys agency use and publishes insights to build capability and trust.
The bottom line for NZ is a coherent adoption‑first agenda (national strategy + public‑service framework + business guidance) that privileges agility, transparency and proportionate risk management over prescriptive rule‑making.
Australia: guardrails, governance and capability
Australia’s recent trajectory is more guardrails‑first. In January 2024, the Department of Industry, Science and Resources (DISR) published the government’s interim response to the Safe and Responsible AI consultation, signalling testing, transparency and accountability in high‑risk settings, while enabling low‑risk uses to proceed largely unimpeded.
Within the Commonwealth, the Digital Transformation Agency issued the Policy for the Responsible Use of AI in Government (v1.1 from September 2024; strengthened as v2.0 effective 15 December 2025). It sets mandatory requirements for accountability, transparency statements, strategic AI adoption plans, internal registers of use cases, staff training, and risk‑based impact assessments. The Australian Public Service Commission further frames five pillars — regulatory clarity, best practice, capability, government as exemplar, and international engagement — while consulting on mandatory guardrails for high‑risk AI in late 2024.
Australia’s earlier AI Ethics Principles (2019) remain a touchstone for industry and government, now complemented by updated Guidance for AI Adoption (2025) from the National AI Centre — streamlining voluntary guardrails into six essential practices aligned to those principles. Capability‑wise, the National Artificial Intelligence Centre (NAIC) — established in CSIRO’s Data61 and now operating through DISR — continues to uplift trusted AI adoption across industry, including SMEs, with practical resources and programs. Australia has also domesticated ISO/IEC 23894 as AS ISO/IEC 23894:2023, signalling alignment with international AI risk‑management standards.
The bottom line for Australia is a guardrails‑and‑governance stack (interim response + APS policy + ethics principles + NAIC + standards) that tightens public‑sector practice and explores targeted obligations for high‑risk AI, even as economy‑wide legislation remains under development.
Compare and contrast: where they converge and diverge
- Regulatory posture: Both nations favour risk‑based approaches over sweeping AI‑specific statutes in the short term, but NZ tilts to ‘enable adoption’ through guidance and transparency, while Australia emphasises ‘prevent harms’ via guardrails and mandatory public‑sector requirements.
- Public sector use: NZ’s GCDO issues framework and GenAI guidance with a strong focus on service experience, accessibility, privacy and human oversight; Australia’s DTA requires agency‑level governance artefacts, accountability and registers. In short: NZ codifies how to use AI well; Australia mandates how agencies must govern AI use.
- Indigenous and community considerations: NZ embeds Treaty of Waitangi commitments and Te Ao Māori perspectives directly in the Algorithm Charter and GenAI guidance; Australia’s consultations consider impacts on First Nations and human rights in defining high‑risk settings but do not anchor them via an equivalent cross‑government charter.
- Standards and international alignment: Both align to OECD AI Principles; Australia additionally localises ISO/IEC 23894 as a national standard, while NZ operationalises OECD principles through public‑service guidance and the Charter.
- Business enablement: NZ pairs its national strategy with responsible AI guidance for business to spur adoption; Australia’s NAIC publishes Guidance for AI Adoption and convenes a Responsible AI Network for industry. Practically, both offer voluntary guidance; Australia supplements this with stronger public‑sector governance requirements.
Implications for CIOs and IT leaders
For IT leaders, the key implications are:
- Plan for dual compliance: In Australia, expect documented accountability, registers and impact assessments for public‑sector engagements; in NZ, be ready to demonstrate transparency, human oversight and Treaty‑aligned considerations in algorithmic deployments.
- Adopt common standards: Align your internal governance to the OECD AI Principles and ISO/IEC 23894 so practices port seamlessly across the Tasman.
- Design for trust at the edge: NZ agencies will look for plain‑English documentation and community engagement; Australian agencies will look for clear accountability and testing in high‑risk contexts. Both reward end‑to‑end transparency and robust assurance.
The strategic takeaway
New Zealand’s adoption‑first program accelerates AI use by providing practical, proportionate guidance; Australia’s guardrails‑and‑governance track seeks to minimise harm while scaling capability. For organisations operating on both sides of the Tasman, harmonising risk management, transparency and human oversight will let you move quickly in NZ and satisfy evolving governance expectations in Australia, without building two incompatible compliance regimes.