The AI advantage being seen in Australian SOCs

Check Point Software Technologies Ltd

By Les Williamson, ANZ Regional Director, Check Point Software Technologies
Wednesday, 25 June, 2025



While value creation is often the overriding interest of organisations in the current enterprise AI era, there are non-trivial concerns about the ways that AI can be weaponised. This often takes the form of threat actors harnessing AI to enhance their own tradecraft. Recent research by Check Point shows these concerns are materialising in observed attack vectors and patterns.

AI threats are no longer theoretical; they are here and evolving rapidly. From autonomous and interactive social engineering across text, audio and video, to the automation of malware development and data mining, and the emergence of jailbroken large language models (LLMs) for criminal use, the impact of AI on the threat landscape is being felt.

This is of course a learning opportunity for practitioners, but it’s not just about learning how attack patterns are evolving. The emphasis is increasingly on learning the role that AI can play in helping defenders ward off AI-enhanced threats. AI is becoming a key part of the operations playbook on both sides of the cybersecurity contest, and that is changing how security teams work.

The use of AI in cybersecurity is being explored in various ways, such as enhancing detection mechanisms, hunting for advanced threat actors, conducting vulnerability research and making complex systems more accessible.

By leveraging AI, particularly LLMs or agentic frameworks, researchers and SecOps teams can efficiently conduct data collection and preliminary analysis. These systems can execute tests, gather data and perform initial analyses, allowing practitioners to concentrate on higher-level strategic considerations. As AI progresses, it will assume the role of a constant investigative assistant, surfacing key insights and unusual behaviours and directing analysts to what matters faster than ever before.
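
To make that workflow concrete, here is a minimal sketch in Python of what such an ‘investigative assistant’ loop could look like. Everything in it (the alert, the two lightweight lookup tools and the ask_llm helper) is a hypothetical placeholder rather than a description of any particular product; the point is the shape of the loop, not a specific integration.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real model call (vendor SDK, local model, etc.).
    Here it simply tells the loop to stop so the sketch runs end to end."""
    return json.dumps({"action": "report", "argument": "",
                       "reason": "No anomalies beyond the original alert."})

# Hypothetical lightweight 'tools' the assistant may call while gathering context.
TOOLS = {
    "whois": lambda domain: {"registrar": "example registrar", "created": "2025-06-01"},
    "passive_dns": lambda domain: {"resolutions": ["203.0.113.10"]},
}

def triage(alert: dict, max_steps: int = 5) -> str:
    """Let the model gather context step by step, then summarise for an analyst."""
    context = {"alert": alert, "findings": []}
    for _ in range(max_steps):
        prompt = (
            "You are assisting a SOC analyst. Given this context, reply in JSON as "
            '{"action": "whois" | "passive_dns" | "report", "argument": "...", "reason": "..."}.\n'
            + json.dumps(context)
        )
        step = json.loads(ask_llm(prompt))
        if step["action"] == "report":
            return step["reason"]  # preliminary analysis handed to the human analyst
        context["findings"].append({step["action"]: TOOLS[step["action"]](step["argument"])})
    return "Step budget exhausted; escalate with findings: " + json.dumps(context["findings"])

if __name__ == "__main__":
    print(triage({"type": "suspicious_domain", "domain": "example-payments.example"}))
```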

Emerging use cases for AI in cybersecurity

There are four emerging use cases for AI in cybersecurity research and operations that are worth examining in a bit more detail.

1. Fake website detection and evaluation

AI is already having an impact on detecting and alerting on websites that mimic government, financial or security organisations to harvest users’ account credentials. Australian government organisations that deal with payments have repeatedly been targeted by scammers setting up fake websites.

The integration of LLMs into big data pipelines is enabling analysts to automate the identification of impersonation and thematic deception at scale. LLMs can analyse millions of domain registrations, flagging those mimicking official institutions, and detecting filenames structured to deceive users. It’s not just about finding the threats: using natural language understanding, LLMs can assess how convincing the fake is and the likelihood of a non-technical user interacting with it.
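
As a simplified illustration of how such a pipeline could be wired together (not a description of Check Point’s actual implementation), the Python sketch below asks a model to score a batch of newly registered domains for impersonation risk. The domain list, brand list, prompt wording and ask_llm helper are all assumptions made for the example.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to whichever LLM the pipeline uses."""
    return json.dumps([{"domain": "servicesaustralia-payments.example", "score": 0.93,
                        "reason": "Mimics a government payments brand with a lookalike name."}])

# Illustrative inputs: in practice these would come from bulk domain registration feeds.
NEW_DOMAINS = ["servicesaustralia-payments.example", "weather-hobby-blog.example"]
PROTECTED_BRANDS = ["Services Australia", "ATO", "myGov"]

def score_domains(domains, brands, threshold=0.8):
    """Ask the model how likely each domain is to be impersonating a protected brand."""
    prompt = (
        "For each domain, estimate the likelihood (0-1) that it impersonates one of these "
        f"organisations: {brands}. Reply as a JSON list of "
        '{"domain": ..., "score": ..., "reason": ...}.\n' + "\n".join(domains)
    )
    results = json.loads(ask_llm(prompt))
    # Only high-scoring lookalikes are pushed to analysts for blocking or takedown.
    return [r for r in results if r["score"] >= threshold]

if __name__ == "__main__":
    for hit in score_domains(NEW_DOMAINS, PROTECTED_BRANDS):
        print(f"{hit['domain']}: {hit['score']:.2f} - {hit['reason']}")
```

Requesting structured JSON and applying a threshold keeps the model’s judgement auditable and lets analysts tune how aggressive the alerting is.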

2. Extracting and correlating data from reports

Threat intelligence reports offer valuable insights into APT groups’ operations, but manually extracting tactics, techniques, and procedures (TTPs) for hunting rules is time-consuming. AI is streamlining this process by automatically identifying attack patterns, mapping them to common security frameworks like MITRE ATT&CK, and generating structured hunting rules. This allows analysts to quickly convert raw threat intelligence into actionable defences, improving threat-hunting efficiency and reducing time-to-detection for specific threats and malware.
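
A minimal sketch of that extraction step, under the assumption of a placeholder ask_llm helper and an invented report excerpt, might look like the Python below; a production pipeline would also validate the technique IDs and push the output into real detection tooling.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for the team's LLM of choice."""
    return json.dumps([
        {"technique_id": "T1566.001", "name": "Spearphishing Attachment",
         "evidence": "macro-enabled lure documents"},
        {"technique_id": "T1059.001", "name": "PowerShell",
         "evidence": "encoded PowerShell used for second-stage download"},
    ])

# Invented excerpt standing in for a real threat intelligence report.
REPORT_EXCERPT = (
    "The group delivered macro-enabled lure documents and used encoded PowerShell "
    "to retrieve its second-stage payload."
)

def extract_ttps(report_text: str) -> list[dict]:
    """Map free-text reporting onto MITRE ATT&CK technique IDs."""
    prompt = (
        "Extract the attacker techniques described below and map each to a MITRE ATT&CK "
        'technique ID. Reply as a JSON list of {"technique_id", "name", "evidence"}.\n'
        + report_text
    )
    return json.loads(ask_llm(prompt))

def to_hunting_stub(ttp: dict) -> dict:
    """Turn an extracted technique into a skeleton hunting rule an analyst can refine."""
    return {"title": f"Hunt for {ttp['name']} ({ttp['technique_id']})",
            "rationale": ttp["evidence"], "status": "draft"}

if __name__ == "__main__":
    for ttp in extract_ttps(REPORT_EXCERPT):
        print(to_hunting_stub(ttp))
```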

AI is also helping cybersecurity teams to correlate unstructured data from external reports or text linked to threat actors. By centralising unstructured information and making it accessible to an LLM, analysts can extract key insights from new research and cross-reference them with historical operations, improving attribution and threat analysis.

3. AI for malware analysis

Automating tasks such as malware analysis is more accessible in the AI era. Decompiling malware code and feeding it into an LLM can already produce impressive results: LLMs can sometimes determine whether code is malicious even when detection rates are very low.
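
As a rough sketch of that workflow, the Python below sends a deliberately simplified, invented decompiled snippet to a placeholder ask_llm helper and asks for a structured verdict; any real verdict would of course still need analyst review.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for the LLM backend used by the analysis pipeline."""
    return json.dumps({"verdict": "likely_malicious", "confidence": 0.8,
                       "behaviours": ["keylogging via SetWindowsHookExA",
                                      "exfiltration over raw socket"]})

# Invented decompiler output; in practice this would come from a disassembler/decompiler.
DECOMPILED_SNIPPET = """
void sub_401000(void) {
    HHOOK h = SetWindowsHookExA(WH_KEYBOARD_LL, fn, 0, 0);
    send(sock, buffer, len, 0);
}
"""

def assess_sample(pseudocode: str) -> dict:
    """Ask the model whether decompiled code looks malicious and why."""
    prompt = (
        "You are assisting a malware analyst. Given the decompiled pseudocode below, "
        'reply in JSON as {"verdict", "confidence", "behaviours"}.\n' + pseudocode
    )
    return json.loads(ask_llm(prompt))

if __name__ == "__main__":
    report = assess_sample(DECOMPILED_SNIPPET)
    print(report["verdict"], report["confidence"], report["behaviours"])
```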

Vulnerability research is often repetitive: examining capabilities, functionality, attack surfaces, documentation, and relevant standards applied to a technology platform, software or device. Routine tasks that an individual would typically perform can be completed much more quickly with an LLM, freeing up practitioners’ time.

Most of the necessary technology for this exists. The challenge lies in integrating these technologies into a cohesive workflow and implementing additional safeguards to ensure accuracy. This remains the biggest bottleneck at the moment, although it is improving rapidly. A clear example of this progress is the increasing proficiency of LLMs in writing code.

The security specificity of the AI will also deepen over time, leading to further gains. While most LLMs today are generic, advancements suggest that a DeepSeek-like model can be created with added reasoning layers focused on computer science topics like coding, low-level assembler knowledge, reverse engineering and vulnerability analysis.

4. Identifying logic security flaws with AI

An emerging use case for AI in defensive contexts is harnessing the advanced reasoning capabilities of LLMs to identify logic security flaws in code bases, a task that has previously been difficult to automate.

Teaching computers what a ‘logic bug’ is has been challenging, but as LLM reasoning improves, models can better understand and detect these issues. In the future, teams might use LLMs to automate the process of finding logic flaws.
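
One way a team might experiment with this today is sketched below, assuming a placeholder ask_llm helper and a deliberately simple snippet containing a missing authorisation check.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for the reviewing model."""
    return json.dumps([{
        "flaw": "missing authorisation check",
        "location": "transfer()",
        "detail": "Any authenticated user can move funds from any account; "
                  "ownership of src_account is never verified."
    }])

# Deliberately flawed example code to review: syntactically fine, logically unsafe.
CODE_UNDER_REVIEW = """
def transfer(user, src_account, dst_account, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    debit(src_account, amount)   # no check that 'user' owns src_account
    credit(dst_account, amount)
"""

def find_logic_flaws(source: str) -> list[dict]:
    """Ask the model for business-logic flaws, not just syntax or memory-safety bugs."""
    prompt = (
        "Review the code below for logic flaws (authorisation gaps, broken invariants, "
        'unsafe state transitions). Reply as a JSON list of {"flaw", "location", "detail"}.\n'
        + source
    )
    return json.loads(ask_llm(prompt))

if __name__ == "__main__":
    for finding in find_logic_flaws(CODE_UNDER_REVIEW):
        print(f"{finding['location']}: {finding['flaw']} - {finding['detail']}")
```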

In pursuit of an AI advantage

AI and automation are now essential components of a cybersecurity tool stack. In more and more cases, AI is showing itself to be the most valuable tool available to analyse, detect, prevent and predict threats, with systems able to learn from and adapt to new threats continuously.

Given the direction of the sector and of technology, it makes sense for security teams to match the pace of attackers by integrating AI into their defences. Security teams that do so are improving their capability and capacity to defend and protect a broad attack surface, while also driving productivity improvements within their existing resourcing envelope.

