Description
We are releasing Elastic Managed LLM as the default large language model (LLM) available out of the box for users of Elastic [Search, Observability, and Security] AI Assistant. This new capability lets customers use generative AI features immediately, without bringing their own model API keys or managing external LLM connectors. The goal is to reduce adoption friction and accelerate productivity for enterprise teams by providing a secure, managed LLM experience.
Key points to document:
- What is Elastic Managed LLM? A default, Elastic-hosted LLM pre-integrated across the Elastic platform, available to Security AI Assistant users without additional setup.
- How to access and use:
- The Elastic Managed LLM is automatically available as the default model in the AI Assistant for eligible users and deployments.
- No manual connector setup or API key management is required for initial use.
- Users can still opt to configure and use their own LLM connectors (OpenAI, Azure, Bedrock, etc.) if desired.
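For teams that do bring their own provider, a connector can also be created programmatically through Kibana's connector API rather than the UI. The sketch below only builds the request body for a custom OpenAI connector; the endpoint path, the `.gen-ai` connector type id, and the `config`/`secrets` field names are assumptions to verify against the linked connector documentation.

```python
import json

# Hypothetical sketch: request body for creating a custom OpenAI connector
# via Kibana's connector API (POST /api/actions/connector). The connector
# type id and field names are assumptions -- check the Kibana docs.
payload = {
    "name": "my-openai-connector",         # display name shown in the model dropdown
    "connector_type_id": ".gen-ai",        # assumed id for the OpenAI connector type
    "config": {
        "apiProvider": "OpenAI",
        "apiUrl": "https://api.openai.com/v1/chat/completions",
    },
    "secrets": {
        "apiKey": "<YOUR_OPENAI_API_KEY>", # placeholder -- never commit real keys
    },
}

body = json.dumps(payload)
print(body)
```

Sending this body (with a real key and a `kbn-xsrf` header) would register the connector, after which it appears alongside the Elastic Managed LLM in the model dropdown.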
- Benefits:
- Instant access to generative AI features for security use cases.
- Eliminates the need for users to procure/manage third-party API keys.
- Reduces time-to-value for teams adopting AI-powered workflows.
- Security and data privacy:
- Data sent to the Elastic Managed LLM is encrypted in transit.
- The model is configured for zero data retention—no prompts or outputs are stored.
- Only request metadata (timestamp, model, region, status) is logged for operational purposes.
- Hosted initially in AWS `us-east-1`, with plans for regional expansion.
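To make the zero-data-retention point concrete, the illustrative record below shows the kind of operational metadata listed above (timestamp, model, region, status) and nothing else; the field names are assumptions for illustration, not the service's actual log schema.

```python
import json
from datetime import datetime, timezone

# Illustrative only: the kind of request metadata described above.
# Field names are assumptions, not the actual Elastic log schema.
log_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model": "elastic-managed-llm",
    "region": "us-east-1",
    "status": 200,
    # Deliberately no "prompt" or "output" fields: request and response
    # bodies are not retained under the zero-data-retention configuration.
}
print(json.dumps(log_record))
```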
- Fallback and configuration:
- Users can select a different LLM provider through the model dropdown if they prefer a custom or third-party LLM.
Resources
- [Elastic Managed LLM documentation](https://www.elastic.co/docs/reference/kibana/connectors-kibana/elastic-managed-llm)
- [AI Assistant documentation](https://www.elastic.co/docs/solutions/security/ai/ai-assistant)
- [LLM performance matrix](https://www.elastic.co/docs/solutions/security/ai/large-language-model-performance-matrix)
- Figma design reference: [o11y] Elastic provided LLM costs
Which documentation set does this change impact?
Elastic On-Prem and Cloud (all)
Feature differences
- Serverless: The Elastic Managed LLM is not visible or available for configuration in serverless projects whose project type does not support it.
- On-Prem/Cloud: Available as default; users can opt out or override with their own connector.
What release is this request related to?
N/A
Serverless release
N/A
Collaboration model
The documentation team
Point of contact
Main contact: @dhru42
Stakeholders: @peluja1012, @jamesspi, @isaclfreire