Innovation shouldn’t come at the cost of trust


By Jason Baden, Regional VP at F5
Monday, 09 March, 2026



AI governance is the new air traffic control for government IT.

Public agencies are entering a new phase of digital transformation. AI is no longer flying the fixed flight paths of traditional systems; it is piloting fleets of adaptive, self-learning systems that make decisions mid-air. But the control towers agencies rely on were built for predictability.

The added catch is that traditional security and compliance frameworks are like ageing radar: they fall short in managing AI risk.

The result is a growing challenge for public sector leaders: how do organisations let AI fly while keeping the skies safe, ensuring accountability, transparency, and oversight remain intact?

The answer isn’t grounding innovation. It’s building a modern control tower: embedding accountability, transparency, and auditability into every AI interaction, rather than viewing it as an afterthought. Because in government, innovation without trust isn’t progress — it’s turbulence.

The agentic era has led to governance gaps

The rise of generative and agentic AI has exposed a fundamental governance gap. While adoption continues to surge across both government and enterprise, oversight mechanisms haven't evolved at the same pace.

Agentic AI systems, capable of reasoning, planning, and executing tasks across multiple applications and datasets, introduce levels of autonomy that traditional frameworks were never designed to manage.

Unlike conventional software that follows predefined rules, agentic AI learns and adapts over time. It can interact with systems through application programming interfaces (APIs), consume diverse datasets and initiate actions throughout the organisation. This creates new risks, from unintended decisions and data exposure to vulnerabilities that emerge at runtime rather than during development.

Legacy compliance approaches, which rely on periodic audits or static policy controls, cannot provide the continuous oversight required for AI systems operating in real time. Governance must therefore shift from retrospective assessment to active, embedded oversight.

From reactive compliance to embedded governance

A more effective model centres on application-aware governance. Instead of separating security, compliance, and AI oversight into distinct silos, organisations need to integrate governance directly into the digital environment where AI operates.

This approach recognises that AI risk does not reside solely within the model itself. It emerges from interactions between agents, applications and data. APIs, for example, are increasingly acting as structured gateways through which AI agents access services and execute actions. By embedding governance at these interaction points, agencies can enforce policies dynamically without slowing innovation.

Application-aware governance reframes compliance as an operational capability. Policies become enforceable guardrails that guide behaviour in real time rather than static requirements reviewed after deployment.
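As a minimal sketch of what an enforceable guardrail at an API interaction point might look like, the snippet below checks an agent's requested action against a policy before it executes. All names, actions, and the policy table are illustrative assumptions, not part of any F5 product or government framework.

```python
# Hypothetical sketch: a governance check at the API gateway, run before an
# AI agent's requested action is executed. Policy table, actions and agent
# identifiers are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class AgentRequest:
    agent_id: str
    action: str    # e.g. "read", "write", "delete"
    resource: str  # e.g. "citizen_records"


# Static policy table for the sketch; a real deployment would evaluate
# policy dynamically against context (data sensitivity, time, risk score).
POLICY = {
    "citizen_records": {"read"},           # agents may only read this dataset
    "public_datasets": {"read", "write"},
}


def enforce(request: AgentRequest) -> bool:
    """Return True if the requested action is permitted; deny by default."""
    allowed_actions = POLICY.get(request.resource, set())
    return request.action in allowed_actions


# A write to citizen records is blocked at the interaction point,
# while a read is allowed.
print(enforce(AgentRequest("agent-7", "write", "citizen_records")))  # False
print(enforce(AgentRequest("agent-7", "read", "citizen_records")))   # True
```

The deny-by-default design choice matters here: an agent reaching a resource the policy does not mention is refused, which keeps the guardrail effective even as agents discover new services at runtime.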

Accountability through visibility

Trust in AI depends on transparency, but transparency cannot rely solely on explainability reports or documentation. Government organisations need visibility into how AI systems behave during operation.

Agentic systems often generate complex chains of activity: querying data sources, interacting with external tools and triggering automated processes before producing an outcome. Without comprehensive visibility into these interactions, oversight becomes impossible.

Runtime auditability provides a solution. By capturing detailed records of AI actions, organisations can maintain accountability without sacrificing efficiency. Observability tools allow agencies to monitor behaviour continuously, identify anomalies, and enforce governance policies.
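The idea of capturing detailed records of agent actions and monitoring them for anomalies can be sketched roughly as follows. Field names, the threshold, and the anomaly rule are illustrative assumptions; a real system would stream records to a dedicated observability platform.

```python
# Hypothetical sketch of runtime auditability: each agent action is captured
# as a structured, timestamped record, and a simple monitor flags agents
# whose denied actions exceed a threshold. All names are assumptions.

import time

audit_log: list[dict] = []


def record_action(agent_id: str, action: str, target: str, outcome: str) -> None:
    """Append one structured audit record for a single agent action."""
    audit_log.append({
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "outcome": outcome,   # "allowed" or "denied"
    })


def anomalies(log: list[dict], max_denied: int = 3) -> list[str]:
    """Flag agents with more than max_denied denied actions: a stand-in
    for the continuous behavioural monitoring described above."""
    denied: dict[str, int] = {}
    for entry in log:
        if entry["outcome"] == "denied":
            denied[entry["agent_id"]] = denied.get(entry["agent_id"], 0) + 1
    return [agent for agent, count in denied.items() if count > max_denied]
```

Because every record carries the agent, target, and outcome, the same log serves both compliance (who did what, when) and security (which agents are behaving abnormally), without a separate audit pass after deployment.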

This level of insight is not only essential for compliance, but also for security. Autonomous AI expands the attack surface by interacting across multiple environments. Continuous monitoring ensures that risks can be identified and addressed quickly, preventing small issues from escalating into major incidents.

Where traditional frameworks fall short

Traditional governance frameworks focus primarily on infrastructure security and data protection. While these remain critical, they are insufficient for AI environments where risk increasingly lies in behaviour rather than infrastructure.

An AI system may meet technical security requirements, yet still produce biased, harmful, or non-compliant outcomes. Governance must therefore evolve to assess intent, context, and impact. This requires monitoring how decisions are made and how outputs affect real-world processes.

Another limitation is the static nature of many compliance models. AI systems evolve over time as they encounter new data and scenarios. Governance must be continuous and adaptive, embedding oversight throughout the lifecycle of AI rather than treating compliance as a fixed checklist completed during deployment.

Building trust

For all organisations, and particularly for government and public agencies, trust is the foundation of digital transformation. Citizens expect fairness, accountability, and transparency from systems that influence public services or policy decisions. Strong AI governance enables agencies to meet these expectations while continuing to innovate.

Embedding governance into AI interactions creates a framework where innovation and trust reinforce each other. Instead of slowing adoption, governance becomes an enabler, allowing agencies to experiment confidently while maintaining oversight.

This approach also supports broader policy goals around responsible AI, ethical deployment, and risk management. By designing governance into systems from the beginning, agencies can reduce reputational risk, improve compliance outcomes, and importantly, strengthen public confidence.

In short, AI governance must evolve alongside the technology it seeks to oversee.

Static compliance models built for predictable software cannot address the complexities of adaptive, autonomous systems. The future lies in governance frameworks that are application-aware, data-aware, and agent-aware, with accountability, transparency, and auditability as foundational throughout all AI workflows.

Agencies cannot rely on legacy control towers built for prop planes when the speed and complexity of AI demand real-time oversight that keeps the airways safe.

Innovation and trust are not opposing forces. In this next phase of digital government, success will be defined by ensuring innovation never comes at the cost of trust.

Image credit: iStock.com/Natee127



  • All content Copyright © 2026 Westwick-Farrow Pty Ltd