Fortifying cyber defence with AI: incentivising adoption through policy

Palo Alto Networks

By Sarah Sloan*
Wednesday, 29 May, 2024


In the ongoing debate surrounding artificial intelligence (AI), the spotlight often falls on its potential for harm — from dystopian visions of killer robots to the proliferation of deepfakes that sow discord and confusion. These concerns warrant careful consideration and should rightly be part of the policy conversation on AI. However, amidst the fervour surrounding AI’s darker implications, it’s crucial to recognise the transformative potential AI holds for enhancing Australia’s cybersecurity posture.

The imperative of AI-driven cyber defence

The cybersecurity industry has a front-row seat to the ever-changing cyber threat landscape, and the picture we see is undeniably concerning.

Adversaries are using AI to innovate and enhance their tactics, leading to a relentless game of cat and mouse between defenders and attackers. For instance, they are using AI to craft more convincing social engineering lures, such as phishing emails, thereby increasing the success rates of their campaigns. Bad actors are also using AI to accelerate and scale attacks, execute simultaneous assaults across multiple vulnerabilities and move laterally within networks faster.

Unlike human adversaries, AI-powered threats operate without the limitations of sleep, breaks or distractions. They can move at machine speed, exploit vulnerabilities and compromise multiple targets simultaneously, posing unprecedented challenges to organisations.

This malicious use of AI presents a formidable challenge to our cyber defences and requires a proactive and adaptive response. However, amidst this backdrop of looming threats lies an unprecedented opportunity to harness the power of AI for good — to fight machine with machine.

The advent of AI-powered cyber defence has ushered in a new era of resilience and proactivity in safeguarding our digital ecosystems.

AI empowers security professionals with real-time visibility, enabling swift detection and response to cyber threats. By automating tedious tasks and streamlining data analysis, AI-driven security operations centres can significantly reduce mean time to detect and respond to incidents, providing a decisive advantage in the ongoing battle against cyber adversaries.
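
To illustrate the kind of automation involved, the minimal sketch below (a hypothetical example, not a description of any vendor’s product) shows an unsupervised model scoring incoming events against a behavioural baseline and automatically escalating the most anomalous ones, so analysts focus on the alerts that matter.

    # Minimal, hypothetical sketch of automated alert triage in a SOC.
    # The event fields and escalation threshold are illustrative assumptions,
    # not any real product's schema or API.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [failed logins in last hour, distinct source countries, MB uploaded]
    baseline_events = np.array([[1, 1, 5], [0, 1, 2], [2, 1, 8], [1, 2, 4], [0, 1, 3]])
    incoming_events = np.array([
        [1, 1, 4],      # resembles normal behaviour
        [40, 6, 900],   # burst of failures, many countries, large upload
    ])

    # Fit an anomaly detector on the baseline, then score new events.
    model = IsolationForest(contamination=0.1, random_state=0).fit(baseline_events)
    scores = model.decision_function(incoming_events)  # lower score = more anomalous

    for event, score in zip(incoming_events, scores):
        if score < 0:  # illustrative cut-off for escalation
            print(f"Escalate to analyst: {event.tolist()} (score {score:.2f})")
        else:
            print(f"Auto-triaged as low priority: {event.tolist()} (score {score:.2f})")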

Moreover, AI isn’t merely a defensive tool but a proactive force in identifying and mitigating vulnerabilities before they’re exploited. Through AI-driven vulnerability discovery, organisations can gain comprehensive insights into their attack surfaces, enabling targeted remediation efforts and bolstering resilience against emerging threats.

Through continuous innovation and investment, we’ve witnessed remarkable outcomes, including the detection of 2.3 million unique attacks daily and the prevention of 11.3 billion total attacks each day. These numbers underscore the pivotal role of AI in staying ahead of adversaries and fortifying our cyber defences.

Maximising AI’s potential through policy

In May, the Senate Select Committee on Adopting Artificial Intelligence (AI) closed submissions to its inquiry into, and report on, the opportunities and impacts for Australia arising out of the uptake of AI technologies in Australia. The inquiry reflects the growing recognition across all levels of government of AI’s importance to our national security and economic prosperity. It represents a unique opportunity to move beyond the fearmongering and position Australia as a world leader in AI and its adoption, setting future generations up for success.

And it’s clear that when it comes to cybersecurity, the riskiest outcome for society is not leveraging AI for defensive cyber purposes.

To maximise the potential of AI for cyber defence, policymakers must adopt a risk-based approach, developed in consultation with stakeholders, that focuses on the following key pillars.

Alignment with global standards

Australia’s alignment with global standards is paramount in ensuring effective AI governance. While AI presents unique challenges and opportunities, it is not an isolated phenomenon exclusive to Australia. Therefore, harmonising Australia’s regulatory framework with those of other countries or multilateral organisations is essential to facilitate global cooperation, promote interoperability and ensure consistency in AI governance. By aligning our AI regulations with international best practices, we can foster innovation, enhance competitiveness and uphold ethical standards in AI development and deployment.

Employ a risk-based approach

Policymakers should employ a risk-based approach when considering AI guardrails, one that accounts for differences in use cases and their potential impacts on individuals. There are fundamental differences in risk between AI systems that, for example, leverage consumer data to make or facilitate consequential decisions affecting people, and those that leverage security data to ensure the robustness and resilience of networks. By carefully considering the varied nature of AI use cases, policymakers can ensure that any new guardrails do not unintentionally inhibit the use of AI-powered tools for cyber defence.

Embracing AI as a defence asset

In response to the ever-evolving cyber threat landscape, it is crucial to embrace AI as a powerful asset within our cyber defence arsenal. To expedite the integration of AI-enhanced cyber defence, the Australian Government must enact policies that foster swift adoption across the economy. An overly risk-averse stance on deploying AI in cybersecurity is incompatible with the urgency of the current threat landscape.

Leveraging AI for cyber defence isn’t just a choice but a necessity.

By embracing innovation, fostering collaboration and adopting a risk-based approach to governance, we can harness the transformative power of AI to build a safer and more secure digital future for all.

Together, let’s seize the opportunity to turn the tide against cyber threats and usher in a new era of resilience and prosperity in the digital domain.

*Sarah Sloan is Head of Government Affairs and Public Policy, ANZ and Indonesia, at Palo Alto Networks.

Image credit: iStock.com/burcu demir
