
Conversation


Copilot AI commented Oct 22, 2025

Problem

When using the EquivalenceEvaluator with Azure OpenAI models, users encounter a 400 Bad Request error:

Azure.RequestFailedException : Invalid 'max_completion_tokens': integer below minimum value. Expected a value >= 16, but got 5 instead.
Status: 400 (Bad Request)
ErrorCode: integer_below_min_value

The EquivalenceEvaluator was configured with MaxOutputTokens = 5, which is below Azure OpenAI's minimum requirement of 16 tokens for the max_completion_tokens parameter.
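
For illustration, here is a minimal sketch of how the failure surfaces through Microsoft.Extensions.AI when an Azure OpenAI-backed IChatClient is asked for fewer than 16 output tokens. The chatClient variable and the prompt text are placeholders, not code from the evaluator.

// Sketch only: MaxOutputTokens maps to Azure OpenAI's max_completion_tokens,
// so any value below 16 is rejected with error code integer_below_min_value.
var options = new ChatOptions { MaxOutputTokens = 5 };

// Fails with a 400 Bad Request (reported in the issue as Azure.RequestFailedException).
await chatClient.GetResponseAsync("Score this response from 1 to 5.", options);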

Solution

Updated MaxOutputTokens from 5 to 16 in the EquivalenceEvaluator class to comply with Azure OpenAI's API requirements.

Changed file:

  • src/Libraries/Microsoft.Extensions.AI.Evaluation.Quality/EquivalenceEvaluator.cs

Change details:

// Before
MaxOutputTokens = 5, // See https://github.com/dotnet/extensions/issues/6814.

// After
MaxOutputTokens = 16, // Azure OpenAI requires minimum of 16 tokens. See https://github.com/dotnet/extensions/issues/6814.
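
For context, a hypothetical sketch of the kind of ChatOptions object the evaluator builds around this value; only the MaxOutputTokens line reflects the actual change, and the other members are illustrative assumptions rather than the library's real code.

// Hypothetical sketch; only MaxOutputTokens reflects the actual change.
var evaluationOptions = new ChatOptions
{
    MaxOutputTokens = 16, // Azure OpenAI requires max_completion_tokens >= 16.
    Temperature = 0.0f,   // assumption: deterministic scoring
};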

Impact

  • Fixes compatibility with Azure OpenAI models (gpt-4o-mini, gpt-4o, etc.)
  • No functional impact on evaluation quality: the evaluator expects only a short 1-5 score in its response, and 16 tokens is more than sufficient (see the sketch after this list)
  • All other evaluators in the library already use higher values (800+), so this change aligns with the existing pattern
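
As a hedged usage sketch, the resulting metric can be read back as a numeric score once the request succeeds; the Get<NumericMetric> call and the EquivalenceMetricName constant are assumed from the evaluation library's public API, and evaluationResult comes from an EvaluateAsync call like the one in the reproduction steps below.

// Hedged sketch; member names are assumptions about the evaluation API.
NumericMetric equivalence = evaluationResult.Get<NumericMetric>(EquivalenceEvaluator.EquivalenceMetricName);
Console.WriteLine($"Equivalence score: {equivalence.Value}"); // expected to fall in the 1-5 range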

Fixes #6945

Warning

Firewall rules blocked me from connecting to one or more addresses.

I tried to connect to the following addresses, but was blocked by firewall rules:

  • securitytools.pkgs.visualstudio.com

If you need me to access, download, or install something from one of these locations, you can either:

Original prompt

This section details the original issue you should resolve

<issue_title>[AI Evaluation] EquivalenceEvaluator uses invalid MaxOutputTokens for Azure OpenAI model</issue_title>
<issue_description>

Description

When attempting to use the EquivalenceEvaluator, my tests throw an exception from Azure:

Azure.RequestFailedException : Invalid 'max_completion_tokens': integer below minimum value. Expected a value >= 16, but got 5 instead.
Status: 400 (Bad Request)
ErrorCode: integer_below_min_value

Content: 
{ 
    "error": { 
        "message": "Invalid 'max_completion_tokens': integer below minimum value. Expected a value >= 16, but got 5 instead.",
        "type": "invalid_request_error", 
        "param": "max_completion_tokens", 
        "code": "integer_below_min_value" 
    }
}

The limit was previously changed to 5 in a PR that fixed another issue.

Reproduction Steps

var messages = new List<ChatMessage>
{
    new(ChatRole.System, "You are a helpful assistant."),
    new(ChatRole.User, "What's the 3rd planet from the sun?")
};

var response = new ChatResponse(new ChatMessage(ChatRole.Assistant, "The Earth is the 3rd planet."));

// chatClient is an IChatClient backed by an Azure OpenAI deployment (see the setup sketch after this snippet).
var chatConfig = new ChatConfiguration(chatClient);

var equivalenceEvaluatorContext = new EquivalenceEvaluatorContext("The 3rd planet from the sun is the Earth.");

var equivalenceEvaluator = new EquivalenceEvaluator();
var evaluationResult = await equivalenceEvaluator.EvaluateAsync(messages, response, chatConfig, additionalContext: [equivalenceEvaluatorContext]);

Debug.WriteLine(evaluationResult.Metrics.Single().Value.Diagnostics?.Single().Message);
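
For completeness, here is a hedged sketch of one way the chatClient used above could be constructed against an Azure OpenAI deployment; the endpoint, key, and deployment name are placeholders, the reporter actually obtains the client from ChatClientAgent.ChatClient (see Configuration below), and the AsIChatClient extension is assumed from the Microsoft.Extensions.AI.OpenAI adapter.

// Placeholder setup for `chatClient`; values are not taken from the issue.
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var azureClient = new AzureOpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com"),
    new AzureKeyCredential("<api-key>"));

IChatClient chatClient = azureClient
    .GetChatClient("gpt-4o-mini") // deployment name
    .AsIChatClient();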

Expected behavior

The evaluator should complete without any exceptions.

Actual behavior

Azure returns a bad request with the following message: Invalid 'max_completion_tokens': integer below minimum value. Expected a value >= 16, but got 5 instead.

Regression?

No response

Known Workarounds

No response

Configuration

Using .NET SDK 10.0.100-rc.2.25502.107. I am getting the IChatClient from ChatClientAgent.ChatClient in the Microsoft.Agents.AI NuGet package. I am using a gpt-4o-mini model deployed in Azure AI Foundry.

Other information

No response</issue_description>

<agent_instructions>Please bump the minimum from 5 to 16</agent_instructions>

Comments on the Issue (you are @copilot in this section)

Fixes #6945




@stephentoub stephentoub marked this pull request as ready for review October 22, 2025 13:26
@stephentoub stephentoub requested a review from a team as a code owner October 22, 2025 13:26
Copilot AI review requested due to automatic review settings October 22, 2025 13:26

Copilot AI left a comment


Pull Request Overview

This PR fixes a validation error in the EquivalenceEvaluator by updating the MaxOutputTokens value from 5 to 16 to comply with Azure OpenAI's minimum token requirement. This prevents a 400 Bad Request error when using the evaluator with Azure OpenAI models.

Key changes:

  • Updated MaxOutputTokens from 5 to 16 in EquivalenceEvaluator
  • Updated the comment to document the Azure OpenAI minimum requirement

Copilot AI and others added 2 commits October 22, 2025 13:28
Copilot AI changed the title from "[WIP] Fix invalid MaxOutputTokens for EquivalenceEvaluator" to "Fix EquivalenceEvaluator MaxOutputTokens to meet Azure OpenAI minimum requirement" on October 22, 2025
Copilot AI requested a review from stephentoub October 22, 2025 13:34
Copilot finished work on behalf of stephentoub October 22, 2025 13:34
@stephentoub stephentoub enabled auto-merge (squash) October 22, 2025 13:35
