Elastic publishes security guidance for AI LLMs
Search AI company Elastic has published new guidance aimed at helping organisations avoid security risks posed by AI large language models (LLMs).
Subsidiary Elastic Security Labs’ new report includes attack mitigation best practices and suggested countermeasures for LLM abuse. The guidance builds and expands on recent Open Web Application Security Project (OWASP) research detailing the most common LLM attack techniques.
Countermeasures explored in the research cover different areas of enterprise architecture that developers should adopt while building LLM-enabled applications. The research also includes a set of dedicated detection rules for LLM abuses.
Elastic head of threat and security intelligence Jake King said the explosion in use of generative AI tools and the LLM they are trained on has expanded the attack surface and left developers and security teams without clear guidance on how to adopt emerging LLM technology safely.
“For all their potential, broad LLM adoption has been met with unease by enterprise leaders, seen as yet another doorway for malicious actors to gain access to private information or a foothold in their IT ecosystems,” he said. “Publishing open detection engineering content is in Elastic’s DNA. Security knowledge should be for everyone: safety is in numbers. We hope that all organisations, whether Elastic customers or not, can take advantage of these new rules and guidance.”
The report ‘LLM Safety Assessment: the Definitive Guide on Avoiding Risk and Abuses’ can be found here.
NSW Auditor-General releases cybersecurity insights report
The Cyber security insights 2025 report identifies that while cybersecurity governance in the NSW...
Genetec updates its physical security SaaS platform
Genetec has announced new capabilities for its Security Center SaaS solution including expanded...
ACSC releases advice on implementing SIEM and SOAR platforms
The ACSC says that implementing SIEM or SOAR platforms can greatly benefit organisations by...