Updating the Essential Eight for the age of artificial intelligence

BeyondTrust

By Morey Haber*
Monday, 16 March, 2026


Australia’s Essential Eight cybersecurity framework has long been regarded as a practical and effective blueprint for improving organisational cyber resilience.

Developed by the Australian Signals Directorate (ASD), the framework focuses on fundamental controls such as application control, patch management, restricting administrative privileges, multi-factor authentication and robust backup processes. These measures have helped organisations reduce exposure to common cyber threats and operational vulnerabilities.

However, the rapid rise of artificial intelligence (AI) across government and enterprise systems is reshaping the threat landscape and exposing a gap in the existing guidance. As AI becomes embedded in software platforms, autonomous agents, and infrastructure, the Essential Eight must evolve to address the new risks associated with AI-enabled technologies.

To be clear, the original framework was developed before the modern surge in AI adoption. While the Essential Eight remains effective in principle, it does not explicitly address governance and security considerations surrounding AI systems. Rather than rewriting the framework entirely, existing categories could be expanded and adapted to incorporate AI-specific security controls.

AI is a force multiplier and a new attack surface

AI has quickly become both a powerful productivity tool and a potential weapon for cybercriminals. Organisations increasingly rely on AI within cloud services, security analytics, development pipelines and customer engagement platforms. In more advanced deployments, autonomous AI agents (agentic AI) can analyse data, generate code, automate workflows and make operational decisions with minimal human intervention.

This capability creates efficiencies but also introduces new risks. AI systems can access sensitive internal data, trigger automated actions and interact with critical infrastructure. If compromised, they could leak confidential information, execute malicious code or manipulate decision-making processes. In addition, AI systems are susceptible to threats unique to machine learning environments, including prompt injection, token theft, data poisoning and model manipulation, all of which will only grow in complexity and attack surface.

Today, AI should be treated as a distinct middleware layer within modern IT environments. Just as organisations once struggled with poorly controlled administrator accounts, unmanaged AI identities could introduce similar vulnerabilities if not properly governed.

Treating AI as an identity

A key shift is the recognition that AI systems effectively function as new forms of digital identity, beyond traditional non-human or machine identities. Unlike traditional software, AI agents can act autonomously, access multiple systems, and perform tasks previously reserved for privileged human users. For this reason, organisations must treat AI identities with the same level of scrutiny applied to human users and machine accounts, while accounting for their unique attributes. This includes monitoring how AI systems authenticate, what resources they can access, and what actions they are authorised to perform.

When examined closely, most AI environments rely on three core components: data, identity, and automation based on privileges. These elements align closely with the security principles already embedded in the Essential Eight. Hence the original recommendation: amend the Essential Eight rather than replace or rewrite it.

Extending the Essential Eight

As AI adoption accelerates across Australian government agencies and businesses, the Essential Eight’s core strategies can be expanded to accommodate AI-driven environments.

Application control, for example, traditionally prevents unauthorised software from executing within networks. In AI environments, this concept can be extended to regulate which agentic AI code is allowed to operate, which APIs agents can access, what privileges they are assigned at runtime, and which automated workflows they are permitted to trigger.
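One way to picture this extension of application control is a deny-by-default allowlist consulted before an agent invokes any tool or API. The agent names, tool names and policy format below are purely illustrative, a minimal sketch rather than a reference implementation:

```python
# Per-agent allowlist of tools/APIs, checked before any action executes.
# Unknown agents and unlisted tools are denied by default.
AGENT_ALLOWLIST = {
    "report-summariser": {"read_document", "search_index"},
    "deploy-assistant": {"read_document", "run_pipeline"},
}

def is_permitted(agent_id: str, tool: str) -> bool:
    """Deny by default: an unknown agent or unlisted tool is rejected."""
    return tool in AGENT_ALLOWLIST.get(agent_id, set())

def invoke_tool(agent_id: str, tool: str) -> str:
    if not is_permitted(agent_id, tool):
        raise PermissionError(f"{agent_id} is not authorised to call {tool}")
    # ... dispatch to the real tool or workflow here ...
    return f"{tool} executed for {agent_id}"
```

The same deny-by-default posture that application control applies to executables carries over directly: anything not explicitly listed simply does not run.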

The principle of least privilege, already central to restricting administrative access, becomes even more critical when AI systems interact with multiple platforms in the form of agentic AI. AI agents often connect to CRM systems, code repositories, and cloud infrastructure. If granted excessive permissions, they could become powerful entry points for threat actors. Security frameworks must therefore ensure AI agents operate with strictly limited access rights, supported by monitoring tools and mechanisms such as just-in-time access, temporary credentials and activity logging.

This becomes a natural addendum to the Essential Eight: enforce least privilege not only for human operators but also for agentic AI deployments.
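The just-in-time access, temporary credentials and activity logging described above can be sketched in a few lines. Everything here (the `grant_access` helper, the `Credential` class, the scope strings) is hypothetical, intended only to show the pattern of scoped, expiring access for an agent identity:

```python
import secrets
import time

class Credential:
    """A short-lived credential bound to one agent and one scope."""
    def __init__(self, agent_id: str, scope: str, ttl_seconds: int):
        self.agent_id = agent_id
        self.scope = scope
        self.token = secrets.token_hex(16)          # random, single-use secret
        self.expires_at = time.time() + ttl_seconds  # hard expiry

    def is_valid(self, scope: str) -> bool:
        # Valid only for the exact scope it was issued for, and only before expiry.
        return scope == self.scope and time.time() < self.expires_at

def grant_access(agent_id: str, scope: str, ttl_seconds: int = 300) -> Credential:
    # Activity logging: every grant is recorded for later review.
    print(f"AUDIT: issued '{scope}' credential to {agent_id} (ttl={ttl_seconds}s)")
    return Credential(agent_id, scope, ttl_seconds)
```

An agent that connects to a CRM would request `crm:read` just before the task and lose it minutes later, rather than holding a standing privileged account.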

Authentication and accountability

Traditional security frameworks rely heavily on multi-factor authentication (MFA) to protect user accounts. However, AI identities do not interact with authentication systems in the same way humans do, and MFA as traditionally implemented does not exist for agentic AI. As a result, agencies and organisations must develop alternative controls to establish confidence in agentic AI identities.

These include ensuring that all AI identities are fully documented, monitored and assigned a responsible human owner. Sensitive credentials used by AI systems — such as OAuth tokens or API keys — should be short-lived and cryptographically bound to specific resources or contexts. These controls can serve as a functional equivalent to MFA for non-human identities.
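A rough sketch of a short-lived token cryptographically bound to a single resource follows. The key handling and field names are illustrative only; a real deployment would use a managed secret store and a standard format such as signed JWTs rather than this hand-rolled example:

```python
import base64
import hashlib
import hmac
import json
import time

# Assumption: in practice this key would live in a vault/KMS, never in code.
SIGNING_KEY = b"replace-with-a-managed-secret"

def issue_token(agent_id: str, resource: str, ttl_seconds: int = 300) -> str:
    """Issue a token bound to one resource ('aud') with a short expiry ('exp')."""
    claims = {"sub": agent_id, "aud": resource, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_token(token: str, resource: str) -> bool:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged
    claims = json.loads(base64.urlsafe_b64decode(payload))
    # Honoured only for the resource it was bound to, and only before expiry.
    return claims["aud"] == resource and time.time() < claims["exp"]
```

Because the token is useless against any other resource and expires within minutes, theft of a single credential buys an attacker very little, which is the property MFA provides for human accounts.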

Again, this leads to an expansion of existing Essential Eight controls rather than the creation of new ones.

Protecting AI data and models

The Essential Eight also emphasises backups and recovery as safeguards against ransomware and destructive cyberattacks. In AI environments, backups serve an additional purpose: protecting the integrity of data and models from a wide variety of attacks.

AI systems rely heavily on training data and model configurations. If attackers manipulate these inputs through data poisoning or prompt injection, the system may generate misleading insights or faulty decisions. Secure backups that include data lineage records, version control and immutable storage are essential for restoring trustworthy AI operations after a compromise, a purpose that extends well beyond the framework's original intent of ransomware recovery.
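The lineage idea can be illustrated with content hashes recorded alongside each backup, so a restored copy can be checked against what was originally saved. The record fields below are an assumption for illustration, not a standard schema:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash used to detect any tampering with an artefact."""
    return hashlib.sha256(data).hexdigest()

def make_lineage_record(version: str, training_data: bytes, model_blob: bytes) -> dict:
    """Store hashes of the training data and model next to the version label."""
    return {
        "version": version,
        "data_sha256": fingerprint(training_data),
        "model_sha256": fingerprint(model_blob),
    }

def verify_restore(record: dict, training_data: bytes, model_blob: bytes) -> bool:
    # A restore is trusted only if both artefacts match the recorded hashes,
    # so poisoned data or a swapped model is caught before redeployment.
    return (record["data_sha256"] == fingerprint(training_data)
            and record["model_sha256"] == fingerprint(model_blob))
```

Kept in immutable storage, such records let an organisation prove which data and model versions were in use at any point, not merely that a backup exists.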

A framework that can easily evolve

The Essential Eight has been successful precisely because it focuses on practical, achievable security controls rather than theoretical frameworks. The emergence of AI does not invalidate these principles but rather demands that they evolve.

Cybercriminals are already leveraging AI to accelerate reconnaissance, personalise phishing attacks, identify vulnerabilities and automate exploit development. As AI capabilities become widely accessible, the speed and sophistication of cyber threats will continue to increase. For agencies and organisations relying on the Essential Eight, the challenge is not to abandon the framework but to extend it: to understand its core intent and apply it to emerging technologies like AI.

*Morey Haber is the Chief Security Advisor at BeyondTrust and has more than 25 years’ IT industry experience. During this time, he has authored four books: Privileged Attack Vectors, Asset Attack Vectors, Identity Attack Vectors, and Cloud Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. He currently oversees BeyondTrust security and governance for corporate and cloud-based solutions.




  • All content Copyright © 2026 Westwick-Farrow Pty Ltd