Elastic publishes security guidance for AI LLMs
Search AI company Elastic has published new guidance aimed at helping organisations avoid security risks posed by AI large language models (LLMs).
Subsidiary Elastic Security Labs’ new report includes attack mitigation best practices and suggested countermeasures for LLM abuse. The guidance builds on recent Open Web Application Security Project (OWASP) research detailing the most common LLM attack techniques.
The countermeasures explored in the research span the areas of enterprise architecture that developers should address when building LLM-enabled applications. The research also includes a set of dedicated detection rules for LLM abuse.
Elastic head of threat and security intelligence Jake King said the explosion in the use of generative AI tools, and the LLMs they are trained on, has expanded the attack surface and left developers and security teams without clear guidance on how to adopt emerging LLM technology safely.
“For all their potential, broad LLM adoption has been met with unease by enterprise leaders, seen as yet another doorway for malicious actors to gain access to private information or a foothold in their IT ecosystems,” he said. “Publishing open detection engineering content is in Elastic’s DNA. Security knowledge should be for everyone: safety is in numbers. We hope that all organisations, whether Elastic customers or not, can take advantage of these new rules and guidance.”
The report ‘LLM Safety Assessment: the Definitive Guide on Avoiding Risk and Abuses’ can be found here.