Australia’s National AI Plan: what government leaders need to know
By Brandon Voight, Fellow, Future Government Institute and Director of Public Sector Solutions at OpenText
Wednesday, 17 December, 2025
Australia’s long‑awaited National AI Plan, published on 2 December 2025, sets a whole‑of‑economy blueprint to capture AI’s opportunities, spread the benefits across communities, and keep Australians safe. The plan’s three pillars — ‘Capture the opportunities’, ‘Spread the benefits’, and ‘Keep Australians safe’ — aim to align investment, skills, public service transformation and guardrails in a pragmatic, Australian way.
A bold intent anchored by three goals
The National AI Plan frames AI as a national capability and a public‑good lever. In brief: build smart infrastructure and domestic capability and attract investment; scale adoption and skills; and uphold safety through proportionate regulation and international engagement.
A concise, ‘plan‑on‑a‑page’ summary highlights immediate next steps the Commonwealth will pursue, including developing data centre principles, expanding access to AI and digital skills, implementing the APS AI Action Plan, ensuring workplace protections remain fit for purpose, and establishing an Australian AI Safety Institute (AISI) to advise on risks and gaps. The plan also underscores that Australia is already a leading investment destination for data centres and that planned investments announced between 2023 and 2025 could scale above $100 billion, with a focus on compute and connectivity.
Beyond the pages of the Plan, the government separately confirmed the AISI will become operational in early 2026, joining the International Network of AI Safety Institutes and providing technical capability to monitor, test and share information on emerging AI technologies, risks and harms. External coverage from Australian tech media reflects broad support for the Institute as an ‘excellent move’, while still urging clarity on leadership and mandate.
Regulation: Australia’s principles‑first stance, with privacy reform now on the books
One of the most debated choices in the Plan is its regulatory posture. For now, Australia is leaning on “strong existing, largely technology‑neutral legal frameworks” while it builds capability (including the AISI) and targets high‑risk settings with proportionate guardrails. Reporting from ABC News characterises this as a shift from previously proposed ‘mandatory guardrails’ in a standalone AI Act toward leveraging existing regimes in the short term. In parallel, industry coverage throughout 2025 suggested the government was moving away from a dedicated AI Act, favouring lighter rules grounded in privacy and consumer protections — another signpost of the current Australian approach.
Crucially, however, privacy law has already changed. The Privacy and Other Legislation Amendment Act 2024 (Cth) — reforms to the Privacy Act 1988 (Cth) — commenced through 2025 and introduced a new statutory tort for serious invasions of privacy (in force from 10 June 2025), stronger data‑security obligations, enhanced enforcement powers for the OAIC, transparency around automated decision making, and foundations for a children’s online privacy code. Legal and civil society explainers further break down the reforms, including transparency over automated decisions and anti‑doxxing offences.
For government readers, the signal is clear: AI deployment in agencies and supplier systems must satisfy tightened privacy expectations now, even as broader AI‑specific legislation remains under review through the AISI and ongoing ‘Safe and responsible AI’ workstreams.
State and local readiness: policies, assurance and procurement nudges
While the national plan provides federal direction, states and councils are developing their own guardrails and procurement levers. For example, in NSW, draft council policies reference the NSW AI Assurance Framework, aiming to embed clear accountability and oversight for AI use in the public sector. On the infrastructure side, Commonwealth procurement settings now require government workloads to be hosted in data centres that meet stringent energy‑efficiency baselines — notably a minimum 5‑star NABERS Energy rating — as part of the Net Zero in Government Operations strategy, pushing market behaviour beyond a vendor‑by‑vendor conversation. Tech and legal media through mid‑2025 reported rising compliance expectations across the data centre sector and flagged emerging AEMC grid‑connection reforms for ‘mega loads’.
The infrastructure reality: AI needs power, fibre and water — and sustainable choices
The Plan’s first pillar — ‘capture the opportunities’ — puts smart infrastructure front and centre: compute, connectivity, and data centre capacity. The government’s narrative emphasises Australia’s attractiveness: stable operating environment, legal protections, land, renewable potential, chips access and subsea cable connectivity — but also flags that demand is surging.
That surge has sustainability consequences. A new CEFC–Baringa report (December 2025) estimates Australia’s operational data centre capacity could reach 2.2–3.2 GW by 2035 (up from around 0.3 GW in 2024–25), representing 8–11% of national electricity consumption, with much of the growth concentrated in Sydney and Melbourne. The same modelling warns that without additional renewables and storage, wholesale prices could spike (NSW up 26% and Victoria up 23% by 2035) and NEM emissions could rise by around 14%. Conversely, adding around 3.2 GW of renewables and around 1.9 GW of battery storage by 2035 could contain price rises and neutralise the added emissions.
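As a rough sanity check, the capacity and consumption figures above can be reconciled with back‑of‑envelope arithmetic. The load factor and national consumption figure below are illustrative assumptions for this sketch, not values from the CEFC–Baringa report:

```python
HOURS_PER_YEAR = 8760
LOAD_FACTOR = 0.8               # assumed average utilisation of installed capacity
NATIONAL_CONSUMPTION_TWH = 200  # assumed annual national consumption, illustrative

def data_centre_share(capacity_gw: float) -> tuple[float, float]:
    """Return (annual energy in TWh, share of national consumption in %)."""
    energy_twh = capacity_gw * HOURS_PER_YEAR * LOAD_FACTOR / 1000
    return energy_twh, 100 * energy_twh / NATIONAL_CONSUMPTION_TWH

for gw in (0.3, 2.2, 3.2):
    twh, pct = data_centre_share(gw)
    print(f"{gw:.1f} GW -> {twh:.1f} TWh/yr (~{pct:.0f}% of consumption)")
```

Under these assumptions, the 2.2–3.2 GW range maps to roughly 8–11% of annual consumption, which is consistent with the report's band.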
AEMO‑commissioned analysis during the 2025 planning cycle similarly flagged data centre demand as material to grid planning scenarios. Energy reporting this year also notes underlying electricity demand has turned a corner after 15 years of flatlining, with AI‑driven data centre loads among key contributors to the upswing.
The market is moving accordingly. OpenAI and NEXTDC’s proposed $7 billion AI campus in Western Sydney signalled long‑term renewable PPAs and ‘next‑generation’ cooling designs that avoid potable water — an emblem of the decarbonisation pressures accompanying hyperscale growth. Global coverage reinforces the structural trend: data centre energy demand could nearly triple by 2035 and AI workloads will push utilisation higher, amplifying the need for grid modernisation and smart on‑site solutions.
The bottom line for public agencies is that the National AI Plan sees data centre expansion as a national capability issue. This means whole‑of‑government coordination across planning approvals, grid capacity, renewable build‑out and workforce pipelines, because AI adoption increasingly depends on these physical constraints.
Climate policy intersects with AI: the Safeguard Mechanism and net‑zero operations
Large facilities with direct emissions above 100,000 tCO2‑e fall under the Safeguard Mechanism, which sets declining baselines (generally around 4.9% per year to 2030) to help meet national climate targets. While data centres themselves are usually heavy electricity consumers rather than large direct emitters, operators with significant stationary energy or industrial emissions may be captured, and all government buyers face growing expectations on Scope 2 and supply‑chain emissions reporting.
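The declining baselines compound year on year. A quick sketch of the trajectory (the starting figure and flat rate are illustrative; actual baselines are set per facility under the scheme's rules):

```python
DECLINE_RATE = 0.049  # indicative annual baseline decline to 2030

def safeguard_baseline(start_tco2e: float, years: int) -> float:
    """Baseline after `years` of compounding decline (simplified sketch)."""
    return start_tco2e * (1 - DECLINE_RATE) ** years

# A facility entering at the 100,000 tCO2-e coverage threshold:
for year in range(6):
    print(year, round(safeguard_baseline(100_000, year)))
```

After five years of compounding decline, the illustrative baseline falls to roughly 78,000 tCO2‑e, which is why abatement and credit decisions cannot be deferred.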
In its first reformed year, 219 facilities were covered and aggregate emissions fell modestly (to around 136 MtCO2‑e), but the big shift was removing headroom and pricing emissions for most facilities, nudging abatement decisions and credit purchases. For public sector tech leaders, the intersection is practical: meeting NABERS thresholds for federal workloads, sourcing credible renewables, and planning load growth in ways that do not increase system‑wide emissions or costs.
Skills, adoption and the public service lens
The Plan’s second pillar — ‘spread the benefits’ — centres on workforce capability, SME adoption support, and service transformation. In practice that means expanding AI and digital skills access, scaling sector‑specific adoption programs (e.g. via NAIC), and implementing the APS AI Action Plan so agencies get better at using AI responsibly and effectively. The launch announcement frames these measures as part of the Future Made in Australia agenda and notes $460 million already committed to AI and related initiatives, with additional support for research, skills and commercialisation.
For CIOs, the immediate task is operationalising responsible AI: procurement clauses that require demonstrable testing and assurance, privacy‑by‑design under the updated Privacy Act settings, and human‑in‑the‑loop for high‑impact decisions. The government’s interim response to the Safe and Responsible AI consultation points to risk‑based guardrails in high‑risk contexts (health, justice, transport), using existing law where possible and considering mandatory obligations (testing, transparency, accountability) for high‑risk deployments. Sector groups tracking the response highlight at least 10 legislative frameworks that may need tuning — another reason to build repeatable assurance workflows now.
What the National AI Plan means for Australian public‑sector leaders
1. Design AI programs around infrastructure realities
AI adoption is now constrained by compute, fibre, grid capacity, and sustainable energy. When you plan AI at scale — computer vision in transport, models for health triage, automated transcription across justice — you must co‑plan with energy agencies and data centre providers.
Use NABERS procurement thresholds and credible PPAs; avoid potable water cooling where feasible; and expect grid‑connection reforms to affect timelines for large loads.
2. Treat privacy reform as live and automate compliance
The privacy tort, automated decision transparency, stronger OAIC enforcement and upcoming children’s code change your risk calculus.
Catalogue AI use cases; embed DPIAs/PIAs; monitor ADM; and be able to explain, log and contest decisions affecting citizens.
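Operationally, 'explain, log and contest' implies keeping a structured record for each automated decision. A minimal sketch of what such a record might hold (the fields are hypothetical, not a prescribed OAIC or APS schema):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ADMDecisionRecord:
    """One logged automated decision (hypothetical schema)."""
    decision_id: str
    system_name: str     # which ADM system produced the decision
    model_version: str   # pin the exact model so decisions stay explainable
    inputs_summary: dict # key inputs, minimised per privacy-by-design
    outcome: str
    explanation: str     # plain-language reason given to the citizen
    human_reviewer: Optional[str] = None  # set when a human is in the loop
    contested: bool = False               # flipped if the citizen challenges it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ADMDecisionRecord(
    decision_id="D-0001",
    system_name="eligibility-screener",
    model_version="2025.11",
    inputs_summary={"age_band": "30-39", "residency": "confirmed"},
    outcome="referred to human review",
    explanation="Income data incomplete; automatic approval not possible.",
    human_reviewer="officer-42",
)
print(asdict(record)["outcome"])
```

The point of the structure is that every decision can later be explained (model version and inputs), audited (timestamp) and contested (the `contested` flag and reviewer field).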
3. Build risk‑based assurance now and anticipate AISI guidance later
The AISI will increase technical scrutiny of advanced systems. Agencies should develop testing and assurance routines (red‑team evaluations, bias/robustness testing, incident reporting) and contracting clauses that require comparable practices from suppliers.
Expect clarity to improve in 2026, but don’t wait; follow the interim response and sector guidance on high‑risk settings.
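One simple, repeatable assurance routine is checking that a model's outcomes do not diverge across groups. A minimal sketch of such a bias check (the function names and threshold are illustrative, not drawn from any AISI guidance):

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def demographic_parity_gap(outcomes) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy evaluation set: model approvals recorded per group
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)
print(f"parity gap: {gap:.2f}")
assert gap < 0.4, "flag for human review"  # illustrative threshold
```

Running the same check on every model release, and writing the threshold into supplier contracts, is one way to make "comparable practices from suppliers" verifiable rather than aspirational.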
4. Connect AI adoption to climate policy and price signals
Even if your facility isn’t a Safeguard Mechanism entity, your energy choices have system‑wide impacts.
Work with central agencies on renewables and storage procurement strategies, and quantify avoided emissions and price stabilisation when you make infrastructure decisions.
5. Invest in workforce and public service capability
Leverage NAIC programs, expand AI literacy, and focus on AI that augments workers rather than replacing them.
Tie skills programs to the APS AI Action Plan and sector‑specific needs (health, emergency management and education).
The sustainability scorecard to watch in 2026
We can expect three threads to dominate early implementation:
- Data centre principles and procurement: translating NABERS and energy efficiency requirements into consistent federal standards for ‘AI‑ready’ hosting, with transparent reporting on energy, emissions, water and resilience.
- Grid coordination and renewables: AEMO/AEMC rule changes and state planning decisions aligning renewable build‑out, storage, and connection capacity with the AI/data‑centre pipeline.
- Safety science and assurance: the AISI’s early publications on testing methods, incident reporting, and evaluation standards — particularly for frontier and general‑purpose models used in government.
