Unlocking AI's potential: navigating the tightrope between innovation and trust

Cognizant Australia

By Sanmeet Bhatia, Vice President Asia Pacific & Japan Public Sector and Health, Cognizant
Wednesday, 01 April, 2026


Australia's National AI Plan is one of the most ambitious technology policy commitments this country has made.

But ambition without foundations is just a policy document, and right now, delivery is being pulled in two directions. In seeking to capture the economic upside of AI, the Plan calls for both rapid innovation and safe, responsible regulation. The logic is sound, but the execution is more complex.

That complexity is now becoming explicit. The government’s recently announced five-part framework for AI infrastructure sets expectations spanning national interest, clean energy, water sustainability, local jobs and sovereign capability. This elevates AI from a technology agenda to an infrastructure, economic and public trust priority.

For agencies, this sharpens an already difficult task. They are being asked to accelerate AI adoption while safeguarding privacy, compliance and public trust — unlocking access to data while simultaneously tightening control over it.

Having worked alongside government leaders across Australia, New Zealand and the broader Asia–Pacific region, I see a consistent pattern: the challenge is not willingness, it is readiness.

The Plan assumes a level of data readiness that, in many agencies, does not exist yet, with fragmented legacy systems, siloed datasets and governance frameworks built for a pre-AI world.

However, these are not excuses: they are the real starting point. The National AI Plan will succeed or fail on one question: whether leaders treat it as a technology project or a transformation mandate.

Those are fundamentally different challenges. One has a vendor and a go-live date. The other requires building the foundations that make AI sustainable, and it demands a different kind of leadership entirely.

AI has turned legacy modernisation into a mandate

The key to AI is data: high-quality, accessible, connected and well-governed data.

AI use in the public sector has jumped from 58% to 70% in the last year, yet 72% of government workers report struggling with disconnected databases, up from 56% in 2024. This gap is where execution risk sits.

In practice, the organisations that get AI right are not those moving fastest, but those that have invested early in data architecture, integrated systems and governance. The ones that skipped that step are now doing expensive rework to revisit foundational decisions.

This is the shift required for agencies. Data and technology modernisation cannot sit alongside the AI agenda as a parallel program: it is the agenda.

In practical terms this means breaking down data silos, aligning data to mission outcomes and building the capability to manage data as a strategic, living asset.

Governance is an enabler, not a constraint

The word ‘governance’ makes some leaders nervous. It shouldn’t.

In my experience, the organisations that have succeeded with AI adoption are not the ones that moved fastest with the loosest guardrails. Instead, they were the ones that built strong governance frameworks from the start, not bolted on at the end. For example, when working with a major aged care provider to move from AI pilot to full production in a regulated environment, progress was unlocked not by easing oversight but by designing governance into every stage of the rollout.

Strong governance doesn’t slow progress: it removes the uncertainty that actually slows progress. It creates conditions for safe and confident AI adoption, rather than ad hoc experimentation.

This principle underpins the Commonwealth’s policy for the responsible use of AI in government, which sets out expectations for accountability, transparency and ethical risk frameworks. It keeps humans in the loop for critical decisions. These are not bureaucratic add-ons: they are the foundations of public trust.

Governance must also be as dynamic as the technology it governs. Static rules written today will be obsolete within 18 months. Agencies should build continuous review into their governance models so they stay fit for purpose as AI capabilities evolve.

The question to ask is not just ‘Are we compliant?’ It is ‘Are we learning, improving and strengthening trust?’

Cultivating a people-first AI culture

This is a transformation mandate, not a technology project, and that distinction matters.

Embracing AI safely and effectively requires leadership to champion change and not just commission it. Leaders must set a clear AI vision, fostering experimentation and cross-functional collaboration within the right guardrails.

The biggest lift does not sit with the CIO or the innovation team; it sits with the people who make decisions every day: case managers, clinicians, policy advisers, frontline service workers. If they do not understand AI well enough to use it responsibly and question it when it seems wrong, no governance framework in the world will protect you.

This means investing in data literacy across the workforce: not just technical training, but the judgment to know when to trust an AI output and when to override it.

Even when the technology is ready and the governance is sound, the hardest work is building the confidence of the frontline team, helping them understand that AI is there to support their judgment, not replace it. When that clicks, adoption will follow quickly.

In practice, this means reframing modernisation as a people-centred journey. Leadership, talent development and operational trust are just as important as the technology program.

Turning intent into action

To move from policy to practice, agencies must shift from high-level intent to real-world adoption. That requires four things to move together: strong data foundations, dynamic governance frameworks, investment in workforce capability and leadership commitment.

None of these can wait for the others to be perfect; agencies must iterate, learn and adjust as they go.

Trust is not conferred by policy alone. It is earned through consistent, observable practices that reassure both the public and stakeholders that AI is being used responsibly and ethically.

The National AI Plan gives Australian government a genuine opportunity to lead. The foundations exist, and the intent is clear. The question now is whether leaders across the public sector will treat this as the transformation mandate it actually is and build accordingly.

