
[Request] Citations in the security solution AI assistant #6473

@KDKHD

Description

What:
We are introducing citations to the security solution AI assistant. In-text citations appear in the LLM response when fact providers (such as the knowledge base or alert information) are used by the LLM to generate the response.

Citations appear in the response as shown below. The user can hover over the superscript element ([#]) to open a popover that contains a hyperlinked label. Clicking the label opens a new tab with information about the source.

Image
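
Under the hood, the raw assistant message stores each citation as an inline !{reference(<id>)} marker (visible in the API examples later in this issue), and the message's metadata.contentReferences map carries the details for each referenced source. The snippet below is a minimal TypeScript sketch of how such markers could be extracted and resolved; the ContentReference type and resolveCitations function are illustrative names, not the actual implementation.

// Minimal sketch (illustrative names, not the actual implementation): extract
// !{reference(<id>)} markers from an assistant message and resolve them
// against the metadata.contentReferences map returned by the conversation APIs.
interface ContentReference {
  id: string;
  type: string;           // e.g. "KnowledgeBaseEntry" in the example responses
  [key: string]: unknown; // type-specific fields such as knowledgeBaseEntryName
}

const REFERENCE_MARKER = /!\{reference\(([^)]+)\)\}/g;

function resolveCitations(
  content: string,
  contentReferences: Record<string, ContentReference> = {}
): ContentReference[] {
  const resolved: ContentReference[] = [];
  for (const match of content.matchAll(REFERENCE_MARKER)) {
    const reference = contentReferences[match[1]];
    if (reference) {
      resolved.push(reference);
    }
  }
  return resolved;
}

// Usage with the sample message shown later in this issue:
// resolveCitations(message.content, message.metadata?.contentReferences)
//   -> [{ id: "IgxEl", type: "KnowledgeBaseEntry", ... }]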

Settings menu changes

  • Citations can be enabled and disabled in the settings menu or with the keyboard shortcut Option+C on macOS and Alt+C on Windows.
  • In addition, the toggle to show/hide anonymized values has been replaced with a switch inside the settings menu, which has the keyboard shortcut Option+A on macOS and Alt+A on Windows.
Image

Why:
There are a few reasons why we added this:

  1. Previously, it was difficult for the end user to know which sections of the LLM response were based on facts and which were "invented" by the LLM. The in-text citations make it clear to the user which sections are based on which fact providers.
  2. Previously, if a user wanted to view the information the LLM used, there was no easy way to do so. Now they can click the citation and the underlying index opens.
  3. Closer feature parity with the Observability AI assistant, which already surfaces citations.

How:

See the PR description for a high-level overview of how it works.

Background & resources

Which documentation set does this change impact?

ESS and serverless

ESS release

The PR for this feature has not been merged yet. The feature will be merged before the 9.0 feature freeze (January 29th), but it will be behind a feature flag. Once the documentation is finished and we have completed our testing, we will enable the feature flag.

Serverless release

We are planning to enable the feature flag on the 24th of February 2025.

Feature differences

The feature is the same across both platforms.

API docs impact

The OpenAPI spec for all of the changed endpoints is updated in the open PR. The following endpoint responses have changed:

/internal/elastic_assistant/actions/connector/{connectorId}/_execute (internal)
  • This is an internal endpoint

POST /api/security_ai_assistant/chat/complete
  • The request has not changed
  • The response has changed; however, there is no type definition for it in the documentation.

GET /api/security_ai_assistant/current_user/conversations/_find
  • The request has not changed
  • The response property data.messages has changed. Each message can now include a metadata property that contains information about the citations (under the contentReferences key). The metadata property only exists if the message contains citations. This is an example message response:
Response from /api/security_ai_assistant/current_user/conversations/_find
{
    "perPage": 20,
    "page": 1,
    "total": 15,
    "data": [
        {
            "timestamp": "2025-01-14T14:26:07.938Z",
            "createdAt": "2025-01-14T14:26:07.938Z",
            "users": [
                {
                    "id": "u_mGBROF_q5bmFCATbLXAcCwKa0k8JvONAwSruelyKA5E_0",
                    "name": "elastic"
                }
            ],
            "title": "2024 Global Threat Report Authors and Reference",
            "category": "assistant",
            "apiConfig": {
                "connectorId": "gpt4oAzure",
                "actionTypeId": ".gen-ai"
            },
            "messages": [
                {
                    "timestamp": "2025-01-14T14:26:27.846Z",
                    "content": "Who are the authors of the 2024 global threat report. List all of the names and give the reference",
                    "role": "user"
                },
                {
                    "timestamp": "2025-01-14T14:26:31.119Z",
                    "content": "The authors of the 2024 Global Threat Report are researchers and engineers with expertise as intelligence analysts, malware reverse engineers, and detection scientists from Elastic Security Labs. They have analyzed threats to help discover the most effective methods to mitigate them.!{reference(IgxEl)}",
                    "role": "assistant",
                    "metadata": {
                        "contentReferences": {
                            "IgxEl": {
                                "knowledgeBaseEntryName": "global_threat_report_entry",
                                "knowledgeBaseEntryId": "5e981c00-26bb-497b-a3ee-feb8bfe19bfc",
                                "id": "IgxEl",
                                "type": "KnowledgeBaseEntry"
                            }
                        }
                    },
                    "traceData": {
                        "traceId": "268c392fb24d2e3567918df79f827c95",
                        "transactionId": "8ec81eeb08b45f21"
                    }
                }
            ]
        }
    ]
}
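
For reference, the new metadata object in the example above could be described with TypeScript types along the following lines. This is a sketch inferred from the sample payload rather than the authoritative OpenAPI definition (which lives in the open PR); other reference types, such as alert- or Security Labs-backed citations, are expected but not shown here.

// Sketch inferred from the example payload above; see the OpenAPI spec in the
// open PR for the authoritative definitions.
interface KnowledgeBaseEntryContentReference {
  id: string;                     // citation id, matched by !{reference(<id>)} in content
  type: 'KnowledgeBaseEntry';
  knowledgeBaseEntryId: string;   // e.g. "5e981c00-26bb-497b-a3ee-feb8bfe19bfc"
  knowledgeBaseEntryName: string; // e.g. "global_threat_report_entry"
}

interface MessageMetadata {
  // Keyed by citation id; present on a message only when it contains citations.
  contentReferences: Record<string, KnowledgeBaseEntryContentReference>;
}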

GET /api/security_ai_assistant/current_user/conversations/:id
  • The request has not changed
  • The response property messages has changed. Each message can now include a metadata property whose contentReferences key contains information about the citations. The metadata property only exists if the message contains citations. This is an example message response:
Response from /api/security_ai_assistant/current_user/conversations/:id
{
    "timestamp": "2025-01-14T14:26:07.938Z",
    "createdAt": "2025-01-14T14:26:07.938Z",
    "users": [
        {
            "id": "u_mGBROF_q5bmFCATbLXAcCwKa0k8JvONAwSruelyKA5E_0",
            "name": "elastic"
        }
    ],
    "title": "2024 Global Threat Report Authors and Reference",
    "category": "assistant",
    "apiConfig": {
        "connectorId": "gpt4oAzure",
        "actionTypeId": ".gen-ai"
    },
    "messages": [
        {
            "timestamp": "2025-01-14T14:26:27.846Z",
            "content": "Who are the authors of the 2024 global threat report. List all of the names and give the reference",
            "role": "user"
        },
        {
            "timestamp": "2025-01-14T14:26:31.119Z",
            "content": "The authors of the 2024 Global Threat Report are researchers and engineers with expertise as intelligence analysts, malware reverse engineers, and detection scientists from Elastic Security Labs. They have analyzed threats to help discover the most effective methods to mitigate them.!{reference(IgxEl)}",
            "role": "assistant",
            "metadata": {
                "contentReferences": {
                    "IgxEl": {
                        "knowledgeBaseEntryName": "global_threat_report_entry",
                        "knowledgeBaseEntryId": "5e981c00-26bb-497b-a3ee-feb8bfe19bfc",
                        "id": "IgxEl",
                        "type": "KnowledgeBaseEntry"
                    }
                }
            },
            "traceData": {
                "traceId": "268c392fb24d2e3567918df79f827c95",
                "transactionId": "8ec81eeb08b45f21"
            }
        }
    ],
    "updatedAt": "2025-01-14T14:40:31.304Z",
    "replacements": {
        
    },
    "namespace": "default",
    "id": "6deedd23-4ec4-438b-9fc9-af46c55e02da"
}

PUT /api/security_ai_assistant/current_user/conversations/{id}
  • The request changed to support updating the metadata of a message (see the sketch below).
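
The sketch below illustrates what such an update might look like, assuming the PUT body accepts messages in the same shape as the GET responses above; the exact request schema is defined by the OpenAPI spec in the open PR, so treat the field layout here as an assumption.

// Hedged sketch: assumes the PUT body accepts messages in the same shape as
// the GET responses above, including metadata.contentReferences. The
// authoritative schema is the OpenAPI spec in the open PR.
const conversationId = '6deedd23-4ec4-438b-9fc9-af46c55e02da';

await fetch(`/api/security_ai_assistant/current_user/conversations/${conversationId}`, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'kbn-xsrf': 'true', // standard Kibana header for non-GET API requests
  },
  body: JSON.stringify({
    messages: [
      {
        timestamp: '2025-01-14T14:26:31.119Z',
        role: 'assistant',
        content: '... !{reference(IgxEl)}',
        metadata: {
          contentReferences: {
            IgxEl: {
              id: 'IgxEl',
              type: 'KnowledgeBaseEntry',
              knowledgeBaseEntryId: '5e981c00-26bb-497b-a3ee-feb8bfe19bfc',
              knowledgeBaseEntryName: 'global_threat_report_entry',
            },
          },
        },
      },
    ],
  }),
});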

Prerequisites, privileges, feature flags

  • Citations are accessible if the user has access to the security AI assistant. Citations will appear when the AI assistant references content from the user's knowledge base, alert information, Security Labs content, and so on.
