
.Net: [USER STORIES] Correlated filters across entire agent lifecycle #6597

Open

Description

In Semantic Kernel, it should be possible to create filters that can be correlated across the entire agent lifecycle:

  1. Invoke agent
  2. Render agent prompt template
  3. Call functions within prompts
  4. Serialize prompt to Chat History object
  5. Choose model
  6. Prepare execution settings for LLM
  7. Send Chat History object to LLM
  8. Receive raw result from LLM
     • Respond to new message
     • Respond to new function call(s)
     • Respond to termination signal
  9. Append new message(s) to chat history
  10. Repeat steps 1-9 until termination signal has been received
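One way the steps above could share state is a correlation context that flows across every step of a lifecycle iteration. The following is a minimal sketch, assuming a hypothetical `LifecycleContext` helper (not an existing Semantic Kernel API) built on .NET's `AsyncLocal<T>` so the same root ID is visible to filters invoked at any step:

```csharp
using System;
using System.Threading;

// Hypothetical sketch: a correlation context flowed via AsyncLocal so that
// filters running at different lifecycle steps can see the same root ID.
// The names LifecycleContext, BeginLifecycle, and NextStepId are
// illustrative, not part of Semantic Kernel.
public static class LifecycleContext
{
    private static readonly AsyncLocal<string?> _rootId = new();
    private static readonly AsyncLocal<int> _step = new();

    // Called once when the agent is invoked (step 1).
    public static string BeginLifecycle()
    {
        _rootId.Value = Guid.NewGuid().ToString("N");
        _step.Value = 0;
        return _rootId.Value!;
    }

    // Called at each subsequent step; produces hierarchical IDs
    // like "<root>.1", "<root>.2", ...
    public static string NextStepId()
    {
        _step.Value++;
        return $"{_rootId.Value}.{_step.Value}";
    }
}
```

Because the context is `AsyncLocal`, it survives `await` boundaries between steps without any plumbing through method signatures.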

The following are scenarios where having a way to correlate filters across the entire lifecycle would be valuable:

Caution

These scenarios may be solvable without full lifecycle filters, making some of these requirements unnecessary.

  • Telemetry with correlation IDs – Customers that use correlation IDs (e.g., 1.2.1) to trace the entire process will need a way to hook into each step. This includes steps that could otherwise be addressed with custom HTTP handlers, because those handlers wouldn't have the context of the previous correlation ID to increment.
  • Detect and remove PII that hasn't been shared – To determine which PII to remove, the developer first needs to check which PII has already been shared by the user. To achieve this, a filter needs to be able to analyze the initial prompt, user messages, and previous function calls to determine what doesn't need to be scrubbed. Afterwards, the developer needs to compare PII coming from other function calls or the LLM response to determine whether it is safe. For example, an agent may have two functions: 1) get_current_user_details and 2) get_notes_about_user. The second one could accidentally grab information about other customers; the developer should be able to compare its output with data that's already been shared in the chat history to detect this.
  • Change available plugins based on previous data – Earlier in the lifecycle, the developer may have determined which state the AI is in (e.g., performing_work, responding_to_user, authoring_text, generating_media, etc.). The developer could then use this information to change downstream behaviors: which prompt template to use, the model, the available plugins, levels of content safety, how the chat history is updated, etc. For example, if the AI is in "responding_to_user" mode, the developer may choose to use a simpler prompt template, a local model, different execution settings, simpler (or no) plugins, and a quick validation by a more expensive model before adding the result to the chat history.
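The PII scenario above can be sketched as a single stateful filter that observes user messages early in the lifecycle and scrubs function results later. This is a minimal sketch, assuming a hypothetical `PiiCorrelationFilter` class and externally supplied PII detection; none of these names are existing Semantic Kernel APIs:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: remembers which identifiers the user has already
// shared, then redacts any identifiers in later function results that the
// user has NOT shared. PII detection itself is assumed to happen elsewhere.
public sealed class PiiCorrelationFilter
{
    private readonly HashSet<string> _sharedPii = new(StringComparer.OrdinalIgnoreCase);

    // Invoked for user messages earlier in the lifecycle (e.g., after step 4).
    public void RecordUserMessage(IEnumerable<string> detectedPii)
    {
        foreach (var item in detectedPii) _sharedPii.Add(item);
    }

    // Invoked for function results later in the lifecycle (e.g., after step 8).
    public string ScrubFunctionResult(string result, IEnumerable<string> detectedPii)
    {
        foreach (var item in detectedPii)
        {
            if (!_sharedPii.Contains(item))
                result = result.Replace(item, "[REDACTED]");
        }
        return result;
    }
}
```

The key point is that both hooks share one instance, which is exactly the correlation that separate, unrelated filters cannot provide today.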

Full lifecycle filters would also simplify adding out-of-the-box filters to a kernel. Instead of adding individual filters, developers could add a single lifecycle filter.
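The "single lifecycle filter" idea could look something like the sketch below: one interface with a hook per lifecycle step, where one implementation can carry shared state (here, a correlation ID) across all hooks. The interface name and hook names are illustrative, not an existing or proposed Semantic Kernel API surface:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical: one filter type spanning the whole lifecycle, instead of
// separate prompt/function/invocation filter registrations.
public interface ILifecycleFilter
{
    void OnAgentInvoking(string agentName);
    void OnPromptRendered(string prompt);
    void OnFunctionInvoked(string functionName, string result);
    void OnModelResponded(string response);
}

// Example implementation: tags every step with one shared correlation ID,
// demonstrating the state-sharing that motivates this user story.
public sealed class LoggingLifecycleFilter : ILifecycleFilter
{
    private readonly string _correlationId = Guid.NewGuid().ToString("N");

    public List<string> Log { get; } = new();

    public void OnAgentInvoking(string agentName) => Log.Add($"{_correlationId} invoke {agentName}");
    public void OnPromptRendered(string prompt) => Log.Add($"{_correlationId} prompt");
    public void OnFunctionInvoked(string functionName, string result) => Log.Add($"{_correlationId} fn {functionName}");
    public void OnModelResponded(string response) => Log.Add($"{_correlationId} response");
}
```

Registering one such filter would replace several independent registrations, which is the simplification the paragraph above describes.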


Metadata


    Labels

    .NET: Issue or pull requests regarding .NET code
    kernel: Issues or pull requests impacting the core kernel
    sk team issue: A tag to denote issues that were created by the Semantic Kernel team (i.e., not the community)
