Why sovereign AI is becoming a strategic priority in Australia
Artificial intelligence (AI) powered by large language models (LLMs) has the potential to transform how governments serve their citizens, but it also raises challenges around keeping sensitive national data within Australia’s borders. Open-source AI models may be the answer.
Like the business community, governments around Australia are navigating the rapid evolution of AI and the emergence of game-changing technologies like generative AI (GenAI), which have the potential to transform how businesses, individuals and government agencies alike operate.
The possibilities are boundless. As noted by the federal government’s Digital Transformation Agency, GenAI presents new and innovative opportunities for the public sector; for instance, improving decision-making, leveraging automation and providing more tailored citizen services.
However, as AI becomes more pervasive it can also present risks, according to Australia’s Department of Industry, Science and Resources. One of the foremost risks arising from AI in government is the potential exposure of sensitive national data beyond Australia’s borders.
Securing sovereignty in AI
The public sector in Australia has long been aware of the need to maintain a certain level of national control over the data it handles, leading to sovereignty requirements for many of its cloud hosting and digital services arrangements aimed at keeping information within Australia. This strategic approach safeguards national security, regulatory compliance and economic interests.
While sovereign cloud capabilities are well established in Australia, it’s time to establish similar controls for AI, elevating the idea of ‘sovereign AI’ to a national priority. Why? Because LLMs use enormous volumes of data. If they’re hosted in other countries, as many are, this data ends up going beyond Australia’s borders, and it’s not always clear where it may end up.
Even if an LLM is hosted locally, the data flows that feed it may send information elsewhere before it is processed closer to home. Such is the nature of proprietary GenAI models created and operated by private industry. They are often a black box, protected by intellectual property laws and unable to be properly examined to fully understand how data is used or where it goes.
For governments, this lack of transparency presents a challenge: how to make the most of GenAI while keeping national data safe and secure. Along with data sovereignty, governments need AI sovereignty. That’s why public sector organisations — and those in regulated industries — should start thinking about sovereign AI now, not later.
Time to look under the hood of AI models
One effective solution to support sovereign AI is open-source AI models. Unlike the LLMs operated by big tech players in other countries, open-source models such as LLaMA, Falcon, Qwen and Mistral, along with open-source tooling, can provide a compelling alternative for governments wanting to reap the benefits of AI without risking their data.
Open-source AI models support sovereign AI because those using them have full visibility of the code behind them. Open-source AI is underpinned by the open-source software paradigm, in which code is designed to be publicly accessible — anyone can see, modify and distribute the code as they see fit.
This means that those using open-source AI models can obtain a full understanding of where their data is going and how it is being used. It also means they have the ability to tweak AI models accordingly to ensure that data governance measures remain intact, even while data is being processed by LLMs.
Properly implemented, open-source AI platforms allow full visibility into the data flows and logic that drive AI outputs, and offer the flexibility to innovate faster through collaboration with a global community of open-source developers. The result is a new class of AI stacks designed around openness that have the flexibility to support vertical-specific models built for government and other industries.
Three pillars for a sovereign AI
GenAI and the technology that underpins it are a complex affair, and maintaining true sovereignty in AI requires broader considerations that include hardware and operational factors. Open-source AI models are an important first step towards sovereign AI, but technology sovereignty and operational sovereignty are just as important as data sovereignty in the overall mix.
Technology sovereignty, for instance, refers not only to visibility into model architecture, training data and system behaviour, but also to control over the hardware and platforms on which these GenAI models run. Achieving technology sovereignty means being able to develop and deploy AI models on infrastructure that is both trusted and locally governed.
Likewise, operational sovereignty means ensuring that AI systems can be managed by locally trusted personnel with the appropriate skills and clearance. This includes building a talent pipeline of AI engineers and associated skills, such as MLOps and cybersecurity, as well as reducing reliance on foreign managed service providers.
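As a concrete illustration of what locally governed deployment can look like, an open-source model server can be self-hosted on infrastructure an agency controls, with model weights vetted and stored locally rather than fetched from abroad at run time. The sketch below is hypothetical: it assumes the open-source Ollama runtime and Docker Compose purely as one example stack, not a prescribed government architecture.

```yaml
# Hypothetical sketch: self-hosting an open-source LLM on local infrastructure.
# Assumes Docker Compose and the open-source Ollama runtime as examples.
services:
  llm:
    image: ollama/ollama            # open-source model server
    volumes:
      - ./models:/root/.ollama      # locally stored, audited model weights
    ports:
      - "127.0.0.1:11434:11434"     # API bound to this host only, not the internet
```

In a setup along these lines, prompts and responses stay on locally governed hardware; stricter environments would typically add network egress filtering and host hardening on top. The operational-sovereignty pillar then comes down to who administers that host and whether they are locally trusted.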
Regardless of the challenges, the emerging need for sovereign AI in Australia should be a wake-up call for public sector entities. We’re entering a critical phase in which the capabilities of AI will help to define national competitiveness and resilience. The entities that succeed will be those whose AI systems are most closely aligned with their strategic priorities and needs.