Elastic publishes security guidance for AI LLMs
Search AI company Elastic has published new guidance aimed at helping organisations avoid security risks posed by AI large language models (LLMs).
Subsidiary Elastic Security Labs’ new report includes attack mitigation best practices and suggested countermeasures for LLM abuse. The guidance builds and expands on recent Open Web Application Security Project (OWASP) research detailing the most common LLM attack techniques.
Countermeasures explored in the research cover different areas of enterprise architecture that developers should adopt while building LLM-enabled applications. The research also includes a set of dedicated detection rules for LLM abuses.
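One commonly recommended countermeasure of this kind is screening user input for prompt-injection phrases before it reaches the model. The sketch below is a hypothetical, minimal illustration of that idea, not code from Elastic's report; the pattern list and function names are assumptions for demonstration.

```python
import re

# Hypothetical minimal input screen for LLM prompts -- illustrative only,
# not taken from Elastic's guidance. It blocks prompts matching a few
# well-known prompt-injection phrases before they reach the model.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"disregard (your|the) guidelines",
]

def is_suspicious(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def guarded_llm_call(prompt: str, llm) -> str:
    """Screen the prompt; only forward clean input to the model."""
    if is_suspicious(prompt):
        return "Request blocked by input filter."
    return llm(prompt)
```

In practice such static pattern lists are only a first layer; production defences typically combine them with the architectural controls and detection rules the report describes.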
Elastic head of threat and security intelligence Jake King said the explosion in the use of generative AI tools, and the LLMs they are trained on, has expanded the attack surface and left developers and security teams without clear guidance on how to adopt emerging LLM technology safely.
“For all their potential, broad LLM adoption has been met with unease by enterprise leaders, seen as yet another doorway for malicious actors to gain access to private information or a foothold in their IT ecosystems,” he said. “Publishing open detection engineering content is in Elastic’s DNA. Security knowledge should be for everyone: safety is in numbers. We hope that all organisations, whether Elastic customers or not, can take advantage of these new rules and guidance.”
The report, ‘LLM Safety Assessment: The Definitive Guide on Avoiding Risk and Abuses’, is available from Elastic Security Labs.