Description
Context
I have an application that uses Microsoft Semantic Kernel. Users can create their own Semantic Kernel functions in the Handlebars format, defining the temperature and max tokens of their function's output themselves as part of the Handlebars template. Additionally, users can choose which LLM will be used to invoke the function through a field in the user interface; the id of the chosen LLM is passed to the KernelFunction through a PromptExecutionSettings object. The problem is that the Semantic Kernel function only takes the execution settings from the Handlebars template into account when the PromptExecutionSettings object is null.
Describe the bug
When creating a kernel function from a Handlebars prompt using kernel.CreateFunctionFromPrompt, if a PromptExecutionSettings object is provided in the executionSettings parameter, any execution settings defined within the Handlebars template's execution_settings block (e.g., max_tokens, temperature) are ignored. This happens even if the provided PromptExecutionSettings object only contains a ServiceId and does not explicitly override settings like max_tokens or temperature.
It appears that providing any PromptExecutionSettings object during function creation causes the entire execution_settings block from the template to be disregarded, rather than being merged with or selectively overridden by the provided PromptExecutionSettings.
To Reproduce
Steps to reproduce the behavior:
- Use Microsoft.SemanticKernel version 1.46.0 and ensure Microsoft.SemanticKernel.PromptTemplates.Handlebars is referenced.
- Configure a Kernel instance with at least one chat completion service (e.g., identified by ServiceId = "gpt-4o-mini").
- Use the following code to define and create a function:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

// --- Setup ---
// Hypothetical setup for illustration; any chat completion connector works,
// as long as the service is registered under the ServiceId used below.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<api-key>", serviceId: "gpt-4o-mini")
    .Build();
// --- Define Prompt with Execution Settings ---
var functionName = "GenerateStory";
var promptWithSettings = """
    name: GenerateStory
    description: This function generates a long funny story.
    template: |
      Generate a long funny story.
    template_format: handlebars
    execution_settings:
      default:
        max_tokens: 10
        temperature: 1.0
    """;
// --- Define Execution Settings containing ONLY the ServiceId ---
var dynamicExecutionSettings = new PromptExecutionSettings { ServiceId = "gpt-4o-mini" }; // Replace "gpt-4o-mini" with your service id if needed
// --- Create Function ---
var storyFunction = kernel.CreateFunctionFromPrompt(
    promptWithSettings,
    functionName: functionName,
    executionSettings: dynamicExecutionSettings, // Pass settings object with only ServiceId
    templateFormat: "handlebars",
    promptTemplateFactory: new HandlebarsPromptTemplateFactory());
// --- Invoke Function ---
Console.WriteLine($"Invoking function '{functionName}' with ServiceId '{dynamicExecutionSettings.ServiceId}'...");
Console.WriteLine("Expected max_tokens: 10, temperature: 1.0 (from template)");

try
{
    // Invoke without additional arguments
    var result = await kernel.InvokeAsync(storyFunction);

    Console.WriteLine("\n--- Function Result ---");
    Console.WriteLine(result);
    Console.WriteLine("----------------------");

    // Observe the length and style of the output here.
    // If max_tokens=10 was ignored, the output will likely be much longer than 10 tokens.
    // If temperature=1.0 was ignored, the output might be less creative than expected.
}
catch (Exception ex)
{
    Console.WriteLine($"Error invoking function: {ex}");
}
- Invoke the function and observe the output (a metadata check that avoids a model call is sketched below).
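A model call is not strictly needed to see the problem. The following is a hedged diagnostic sketch, assuming only the public KernelFunction.ExecutionSettings property, that inspects the created function's default settings:

// Diagnostic sketch: dump the function's effective default execution settings.
// If the bug is present, only the entry derived from dynamicExecutionSettings
// shows up; the template's "default" entry carrying max_tokens/temperature is gone.
if (storyFunction.ExecutionSettings is { } effectiveSettings)
{
    foreach (var entry in effectiveSettings)
    {
        var extras = entry.Value.ExtensionData is { } data ? string.Join(", ", data.Keys) : "(none)";
        Console.WriteLine($"Key: {entry.Key}, ServiceId: {entry.Value.ServiceId}, ExtensionData: {extras}");
    }
}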
Expected behavior
The function execution should use max_tokens: 10 and temperature: 1.0 as defined in the Handlebars template's execution_settings block because these were not overridden in the dynamicExecutionSettings object passed to CreateFunctionFromPrompt.
The expected mechanism is that settings provided during function creation/invocation should selectively override or merge with the template defaults. Providing only a ServiceId should use that service but retain other default parameters (max_tokens, temperature, etc.) from the template.
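Concretely, the expectation is that the effective settings for the invocation would look roughly like this (illustrative only, not actual library output):

// Illustrative expectation, not actual library behavior: the template defaults
// are retained, and only the ServiceId comes from the settings supplied at creation.
var expectedEffectiveSettings = new PromptExecutionSettings
{
    ServiceId = "gpt-4o-mini",               // from dynamicExecutionSettings
    ExtensionData = new Dictionary<string, object>
    {
        ["max_tokens"] = 10,                 // from the template
        ["temperature"] = 1.0                // from the template
    }
};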
Actual behavior
The max_tokens and temperature values defined in the template's execution_settings block are ignored when dynamicExecutionSettings (containing only ServiceId) is passed. The execution appears to use the AI service's default values for these parameters instead (e.g., the output is significantly longer than 10 tokens, suggesting max_tokens was ignored).
- Observation: If the executionSettings parameter is omitted entirely during the CreateFunctionFromPrompt call (i.e., executionSettings: null), the settings defined within the Handlebars template are correctly applied.
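For comparison, the same prompt created without the executionSettings argument picks up the template settings:

// Working case: no executionSettings passed at creation, so the template's
// execution_settings block (max_tokens: 10, temperature: 1.0) is applied.
var templateOnlyFunction = kernel.CreateFunctionFromPrompt(
    promptWithSettings,
    functionName: functionName,
    templateFormat: "handlebars",
    promptTemplateFactory: new HandlebarsPromptTemplateFactory());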
Platform
- OS: Windows 11
- Language: C# (.NET)
- Source: Microsoft.SemanticKernel v1.46.0
- AI model: Depends on the ServiceId set in the PromptExecutionSettings object
Additional context
The primary goal is to define default execution parameters (such as temperature, max_tokens, and top_p) within the prompt template itself, for maintainability and consistency across different uses of the prompt and across different Semantic Kernel functions. However, we need the flexibility to dynamically select the specific AI model/service (via ServiceId) at runtime based on user choice or other logic.
This bug currently prevents this workflow, forcing us to either:
- Omit the executionSettings parameter during creation (losing the ability to set the ServiceId dynamically), or
- Pass all execution settings dynamically (losing the benefit of template-defined settings, or forcing workarounds such as parsing the prompt ourselves; see the sketch below).
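For illustration, a workaround of the second kind could look like the following sketch. It assumes the KernelFunctionYaml.ToPromptTemplateConfig helper from the Microsoft.SemanticKernel.Yaml package can parse the template, and that the parsed settings can be re-keyed under the runtime-selected service id; treat it as a sketch under those assumptions, not a confirmed pattern.

// Workaround sketch (assumes Microsoft.SemanticKernel.Yaml is referenced):
// parse the prompt ourselves, move the template's default settings under the
// dynamically chosen service id, then create the function from the config.
var config = KernelFunctionYaml.ToPromptTemplateConfig(promptWithSettings);
if (config.ExecutionSettings.TryGetValue("default", out var templateDefaults))
{
    config.ExecutionSettings.Remove("default");
    templateDefaults.ServiceId = "gpt-4o-mini"; // selected by the user at runtime
    config.AddExecutionSettings(templateDefaults);
}
var mergedFunction = kernel.CreateFunctionFromPrompt(config, new HandlebarsPromptTemplateFactory());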
This behavior most likely originates from the following method in KernelFunctionFromPrompt.cs: when executionSettings is not null, it is assigned directly to promptConfig.ExecutionSettings and never merged with settings defined in the template, which would explain why the template's execution_settings block only takes effect when executionSettings is null.
/// <summary>
/// Creates a <see cref="KernelFunction"/> instance for a prompt specified via a prompt template.
/// </summary>
/// <param name="promptTemplate">Prompt template for the function, defined using the <see cref="PromptTemplateConfig.SemanticKernelTemplateFormat"/> template format.</param>
/// <param name="executionSettings">Default execution settings to use when invoking this prompt function.</param>
/// <param name="functionName">A name for the given function. The name can be referenced in templates and used by the pipeline planner.</param>
/// <param name="description">The description to use for the function.</param>
/// <param name="templateFormat">Optional format of the template. Must be provided if a prompt template factory is provided</param>
/// <param name="promptTemplateFactory">Optional: Prompt template factory</param>
/// <param name="loggerFactory">Logger factory</param>
/// <returns>A function ready to use</returns>
[RequiresUnreferencedCode("Uses reflection to handle various aspects of the function creation and invocation, making it incompatible with AOT scenarios.")]
[RequiresDynamicCode("Uses reflection to handle various aspects of the function creation and invocation, making it incompatible with AOT scenarios.")]
public static KernelFunction Create(
    string promptTemplate,
    Dictionary<string, PromptExecutionSettings>? executionSettings = null,
    string? functionName = null,
    string? description = null,
    string? templateFormat = null,
    IPromptTemplateFactory? promptTemplateFactory = null,
    ILoggerFactory? loggerFactory = null)
{
    Verify.NotNullOrWhiteSpace(promptTemplate);

    if (promptTemplateFactory is not null)
    {
        if (string.IsNullOrWhiteSpace(templateFormat))
        {
            throw new ArgumentException($"Template format is required when providing a {nameof(promptTemplateFactory)}", nameof(templateFormat));
        }
    }

    var promptConfig = new PromptTemplateConfig
    {
        TemplateFormat = templateFormat ?? PromptTemplateConfig.SemanticKernelTemplateFormat,
        Name = functionName,
        Description = description ?? "Generic function, unknown purpose",
        Template = promptTemplate
    };

    if (executionSettings is not null)
    {
        promptConfig.ExecutionSettings = executionSettings;
    }

    var factory = promptTemplateFactory ?? new KernelPromptTemplateFactory(loggerFactory);

    return Create(
        promptTemplate: factory.Create(promptConfig),
        promptConfig: promptConfig,
        loggerFactory: loggerFactory);
}