
.Net: Bug: GeminiChatCompletionClient does not invoke IAutoFunctionInvocationFilter during auto function calling sequence #12998

@sadikulsayed

Description


Describe the bug
In the current implementation of GeminiChatCompletionClient (source link), the connector only invokes IFunctionInvocationFilter after the Gemini LLM response; it never invokes IAutoFunctionInvocationFilter as part of the auto function calling loop. This diverges from the expected filter pipeline as documented for OpenAI and for Semantic Kernel auto-invocation workflows in general.

As a result:

There is no filter pipeline for managing the entire auto function invocation loop.
Patterns requiring orchestration over multiple tool calls (e.g., early termination, batch function execution, multi-step planning) cannot be implemented for Gemini.
Existing workflow code that relies on IAutoFunctionInvocationFilter is silently skipped when using Gemini.
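
For reference, a minimal IAutoFunctionInvocationFilter along the lines of the Semantic Kernel samples (the class name is illustrative). With the OpenAI connectors, every step of the auto-invocation loop flows through OnAutoFunctionInvocationAsync; with Gemini it never fires:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Should wrap every function call in the auto-invocation loop;
// with GeminiChatCompletionClient it is never executed.
public sealed class LoggingAutoFilter : IAutoFunctionInvocationFilter
{
    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        Console.WriteLine(
            $"Auto-invoking {context.Function.Name} " +
            $"(request #{context.RequestSequenceIndex}, call #{context.FunctionSequenceIndex})");

        await next(context); // run the function and any remaining filters
    }
}
```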

To Reproduce

  1. Create a Semantic Kernel setup with Gemini as the LLM.
  2. Register an implementation of IAutoFunctionInvocationFilter with the kernel.
  3. Trigger an LLM response from Gemini that includes multiple tool calls or requires auto invocation.
  4. Observe that the auto function invocation filter is never executed, while the standard function invocation filter (IFunctionInvocationFilter) is.
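
A minimal repro sketch, using the LoggingAutoFilter above; the model id, API key, and plugin are placeholders:

```csharp
using System;
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Google;

var builder = Kernel.CreateBuilder();
builder.AddGoogleAIGeminiChatCompletion(modelId: "gemini-1.5-pro", apiKey: "<key>");
builder.Services.AddSingleton<IAutoFunctionInvocationFilter>(new LoggingAutoFilter());
builder.Plugins.AddFromType<TimePlugin>();
var kernel = builder.Build();

var settings = new GeminiPromptExecutionSettings
{
    ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions
};

// GetUtcNow() executes (and an IFunctionInvocationFilter would fire),
// but LoggingAutoFilter produces no output.
var result = await kernel.InvokePromptAsync("What is the current UTC time?", new(settings));
Console.WriteLine(result);

public sealed class TimePlugin
{
    [KernelFunction, Description("Gets the current UTC time.")]
    public string GetUtcNow() => DateTime.UtcNow.ToString("O");
}
```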

Expected behavior
The connector should invoke IAutoFunctionInvocationFilter in auto function calling loops, just as it does for other LLM connectors.
This allows developers to intercept, control, and orchestrate multi-tool workflows, including terminating the loop early or applying logic across all planned function calls.
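
For example, early termination via the Terminate flag that other connectors honor (a sketch, not Gemini-specific code):

```csharp
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Ends the auto-invocation loop after the first tool call completes.
public sealed class EarlyTerminationFilter : IAutoFunctionInvocationFilter
{
    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        await next(context);

        // Stop the loop here: return the current function result to the caller
        // instead of sending it back to the model for another round-trip.
        context.Terminate = true;
    }
}
```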

Actual behavior
IAutoFunctionInvocationFilter is never invoked during tool auto function calling when using the Gemini connector, making it impossible to orchestrate, batch, or terminate the multi-step tool planning/execution loop.

However, the Gemini connector does invoke IFunctionInvocationFilter, but only around the execution of each individual tool function requested by the LLM (i.e., post-model response, on a per-function basis).

As a result, developers can only perform per-function interception and logging within Gemini's auto function invocation execution, not sequence-wide or planning-wide control.
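
For contrast, a sketch of the per-function hook that Gemini does honor; note that its scope is a single call, with no view of the loop as a whole:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// The only filter GeminiChatCompletionClient currently runs.
public sealed class PerFunctionLoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"Invoking {context.Function.Name}");
        await next(context);
        // Scope ends at this single call: there is no sequence index,
        // no Terminate flag, and no way to influence the next tool call.
    }
}
```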

Platform

  • Language: C#
  • Source: Microsoft.SemanticKernel.Connectors.Google 1.61.0-alpha
