Elastic publishes security guidance for AI LLMs
Search AI company Elastic has published new guidance aimed at helping organisations avoid security risks posed by AI large language models (LLMs).
Subsidiary Elastic Security Labs’ new report includes attack mitigation best practices and suggested countermeasures for LLM abuse. The guidance builds and expands on recent Open Web Application Security Project (OWASP) research detailing the most common LLM attack techniques.
Countermeasures explored in the research cover different areas of enterprise architecture that developers should adopt while building LLM-enabled applications. The research also includes a set of dedicated detection rules for LLM abuses.
Elastic head of threat and security intelligence Jake King said the explosion in the use of generative AI tools, and of the LLMs they are built on, has expanded the attack surface and left developers and security teams without clear guidance on how to adopt emerging LLM technology safely.
“For all their potential, broad LLM adoption has been met with unease by enterprise leaders, seen as yet another doorway for malicious actors to gain access to private information or a foothold in their IT ecosystems,” he said. “Publishing open detection engineering content is in Elastic’s DNA. Security knowledge should be for everyone: safety is in numbers. We hope that all organisations, whether Elastic customers or not, can take advantage of these new rules and guidance.”
The report ‘LLM Safety Assessment: the Definitive Guide on Avoiding Risk and Abuses’ can be found here.