Move images to solutions folder #1082
Merged 2 commits on Apr 10, 2025
reference/apm/observability/apm.md (1 addition, 1 deletion)

```diff
@@ -7,7 +7,7 @@ mapped_pages:

 Elastic APM is an application performance monitoring system built on the {{stack}}. It allows you to monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. This makes it easy to pinpoint and fix performance problems quickly.

-:::{image} /reference/images/observability-apm-app-landing.png
+:::{image} /reference/apm/images/observability-apm-app-landing.png
 :alt: Applications UI in {kib}
 :screenshot:
 :::
```
solutions/observability/applications/llm-observability.md (4 additions, 4 deletions)

```diff
@@ -44,7 +44,7 @@ Check [these instructions](https://elastic.github.io/opentelemetry/use-cases/llm

 For an SRE team optimizing a customer support system powered by Azure OpenAI, Elastic’s [Azure OpenAI integration](https://www.elastic.co/guide/en/integrations/current/azure_openai.html) provides critical insights. They can quickly identify which model variants experience higher latency or error rates, enabling smarter decisions on model deployment or even switching providers based on real-time performance metrics.

-:::{image} ../../../images/llm-performance-reliability.png
+:::{image} /solutions/images/llm-performance-reliability.png
 :alt: LLM performance and reliability
 :screenshot:
 :::
@@ -53,7 +53,7 @@ For an SRE team optimizing a customer support system powered by Azure OpenAI, El

 Consider an enterprise utilizing an OpenAI model for real-time user interactions. Encountering unexplained delays, an SRE can use OpenAI tracing to dissect the transaction pathway, identify if one specific API call or model invocation is the bottleneck, and monitor a request to see the exact prompt and response between the user and the LLM.

-:::{image} ../../../images/llm-openai-applications.png
+:::{image} /solutions/images/llm-openai-applications.png
 :alt: Troubleshoot OpenAI-powered applications
 :screenshot:
 :::
@@ -62,7 +62,7 @@ Consider an enterprise utilizing an OpenAI model for real-time user interactions

 For cost-sensitive deployments, being acutely aware of which LLM configurations are more cost-effective is crucial. Elastic’s dashboards, pre-configured to display model usage patterns, help mitigate unnecessary spending effectively. You can use out-of-the-box dashboards for metrics, logs, and traces.

-:::{image} ../../../images/llm-costs-usage-concerns.png
+:::{image} /solutions/images/llm-costs-usage-concerns.png
 :alt: LLM cost and usage concerns
 :screenshot:
 :::
@@ -71,7 +71,7 @@ For cost-sensitive deployments, being acutely aware of which LLM configurations

 With the Elastic Amazon Bedrock integration for Guardrails, SREs can swiftly address security concerns, like verifying if certain user interactions prompt policy violations. Elastic's observability logs clarify whether guardrails rightly blocked potentially harmful responses, bolstering compliance assurance.

-:::{image} ../../../images/llm-amazon-bedrock-guardrails.png
+:::{image} /solutions/images/llm-amazon-bedrock-guardrails.png
 :alt: Elastic Amazon Bedrock integration for Guardrails
 :screenshot:
 :::
```
solutions/security/ai/ai-assistant.md (3 additions, 3 deletions)

```diff
@@ -87,14 +87,14 @@ Use these features to adjust and act on your conversations with AI Assistant:
 * (Optional) Select a *System Prompt* at the beginning of a conversation by using the **Select Prompt** menu. System Prompts provide context to the model, informing its response. To create a System Prompt, open the System Prompts dropdown menu and click **+ Add new System Prompt…​**.
 * (Optional) Select a *Quick Prompt* at the bottom of the chat window to get help writing a prompt for a specific purpose, such as summarizing an alert or converting a query from a legacy SIEM to {{elastic-sec}}.

-:::{image} ../../images/security-quick-prompts.png
+:::{image} /solutions/images/security-quick-prompts.png
 :alt: Quick Prompts highlighted below a conversation
 :screenshot:
 :::

 * System Prompts and Quick Prompts can also be configured from the corresponding tabs on the **Security AI settings** page.

-:::{image} ../../images/security-assistant-settings-system-prompts.png
+:::{image} /solutions/images/security-assistant-settings-system-prompts.png
 :alt: The Security AI settings menu's System Prompts tab
 :::

@@ -119,7 +119,7 @@ AI Assistant can remember particular information you tell it to remember. For ex

 To adjust AI Assistant's settings from the chat window, click the **More** (three dots) button in the upper-right.

-::::{image} ../../../images/security-attack-discovery-more-popover.png
+::::{image} /solutions/images/security-attack-discovery-more-popover.png
 :alt: AI Assistant's more options popover
 :screenshot:
 ::::
```
solutions/security/ai/attack-discovery.md (1 addition, 1 deletion)

```diff
@@ -39,7 +39,7 @@ You need the `Attack Discovery: All` privilege to use Attack Discovery.

 By default, Attack Discovery analyzes up to 100 alerts from the last 24 hours, but you can customize how many and which alerts it analyzes using the settings menu. To open it, click the gear icon next to the **Generate** button.

-::::{image} ../../../images/security-attack-discovery-settings.png
+::::{image} /solutions/images/security-attack-discovery-settings.png
 :alt: Attack Discovery's settings menu
 :width: 500px
 ::::
```
solutions/security/get-started/automatic-migration.md (5 additions, 5 deletions)

```diff
@@ -19,19 +19,19 @@ You can ingest your data before migrating your rules, or migrate your rules firs
 * A working [LLM connector](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md).
 * {{stack}} users: an [Enterprise](https://www.elastic.co/pricing) subscription.
 * {{Stack}} users: {{ml}} must be enabled.
-* {{serverless-short}} users: a [Security Complete](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) subscription.
+* {{serverless-short}} users: a [Security Complete](/deploy-manage/deploy/elastic-cloud/project-settings.md) subscription.
 * {{ecloud}} users: {{ml}} must be enabled. We recommend a minimum size of 4GB of RAM per {{ml}} zone.

 ::::

 ## Get started with Automatic Migration

 1. Find **Get started** in the navigation menu or use the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md).
-2. Under **Configure AI provider**, select a configured model or [add a new one](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md). For information on how different models perform, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md).
+2. Under **Configure AI provider**, select a configured model or [add a new one](/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md). For information on how different models perform, refer to the [LLM performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md).
 3. Next, under **Migrate rules & add data**, click **Translate your existing SIEM rules to Elastic**, then **Upload rules**.
 4. Follow the instructions on the **Upload Splunk SIEM rules** flyout to export your rules from Splunk as JSON.

-:::{image} ../../../images/security-siem-migration-1.png
+:::{image} /solutions/images/security-siem-migration-1.png
 :alt: the Upload Splunk SIEM rules flyout
 :width: 700px
 :screenshot:
@@ -70,7 +70,7 @@ This section describes the **Translated rules** page's interface and explains ho

 When you upload a new batch of rules, they are assigned a name and number, for example `SIEM rule migration 1`, or `SIEM rule migration 2`. Use the **Migrations** dropdown menu in the upper right to select which batch appears.

-::::{image} ../../../images/security-siem-migration-processed-rules.png
+::::{image} /solutions/images/security-siem-migration-processed-rules.png
 :alt: The translated rules page
 :width: 850px
 :screenshot:
@@ -115,7 +115,7 @@ You cannot edit Elastic-authored rules using this interface, but after they are

 Click the rule's name to open the rule's details flyout to the **Translation** tab, which shows the source rule alongside the translated — or partially translated — Elastic version. You can update any part of the rule. When finished, click **Save**.

-::::{image} ../../../images/security-siem-migration-edit-rule.png
+::::{image} /solutions/images/security-siem-migration-edit-rule.png
 :alt: The rule details flyout
 :width: 850px
 :screenshot:
```
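The substance of this PR is a mechanical rewrite: relative `{image}` directive paths such as `../../../images/foo.png` become absolute paths rooted at the doc set, such as `/solutions/images/foo.png` (relative markdown links get the same treatment). A change like this could be scripted; the sketch below is only an illustration of the image-path case, not the method actually used for this PR, and the function name `absolutize_image_paths` and parameter `doc_root_prefix` are hypothetical.

```python
import re

def absolutize_image_paths(text: str, doc_root_prefix: str) -> str:
    """Rewrite relative {image} directive paths to absolute ones.

    Matches ':::{image}' or '::::{image}' followed by one or more '../'
    segments and an 'images/' folder, and replaces the relative prefix
    with doc_root_prefix (e.g. '/solutions').
    """
    pattern = re.compile(r"(:{3,4}\{image\}\s+)(?:\.\./)+images/(\S+)")
    return pattern.sub(
        lambda m: f"{m.group(1)}{doc_root_prefix}/images/{m.group(2)}", text
    )

line = ":::{image} ../../../images/llm-costs-usage-concerns.png"
print(absolutize_image_paths(line, "/solutions"))
# -> :::{image} /solutions/images/llm-costs-usage-concerns.png
```

Already-absolute paths (like the `reference/apm` change in this PR, which moved an image between two absolute locations) would need a separate mapping and are deliberately left untouched by this sketch.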