Elastic publishes security guidance for AI LLMs
Search AI company Elastic has published new guidance aimed at helping organisations avoid security risks posed by AI large language models (LLMs).
Subsidiary Elastic Security Labs’ new report includes attack mitigation best practices and suggested countermeasures for LLM abuse. The guidance builds and expands on recent Open Web Application Security Project (OWASP) research detailing the most common LLM attack techniques.
Countermeasures explored in the research cover different areas of enterprise architecture that developers should adopt while building LLM-enabled applications. The research also includes a set of dedicated detection rules for LLM abuses.
Elastic head of threat and security intelligence Jake King said the explosion in use of generative AI tools and the LLM they are trained on has expanded the attack surface and left developers and security teams without clear guidance on how to adopt emerging LLM technology safely.
“For all their potential, broad LLM adoption has been met with unease by enterprise leaders, seen as yet another doorway for malicious actors to gain access to private information or a foothold in their IT ecosystems,” he said. “Publishing open detection engineering content is in Elastic’s DNA. Security knowledge should be for everyone: safety is in numbers. We hope that all organisations, whether Elastic customers or not, can take advantage of these new rules and guidance.”
The report ‘LLM Safety Assessment: the Definitive Guide on Avoiding Risk and Abuses’ can be found here.
Half of government agencies falling short on email security measures: report
Lack of consistency across Australian Government bodies leaves critical vulnerabilities in the...
CISA and Microsoft warn of “active attacks” on SharePoint
Alerts have been published active attacks exploiting a remote code execution vulnerability in...
NSW Government agencies have ineffective cybersecurity controls: report
The Audit Office of New South Wales has found that NSW Government agencies still have minimal...