This repository was archived by the owner on Feb 11, 2025. It is now read-only.

stream Parameter Omission in Chat Completion Requests #46

@keyboardsamurai

Description

The DefaultOpenAiClient in the openai4j project does not preserve the stream parameter in chat completion requests, even when it is explicitly set to false. Because the parameter is dropped from the outgoing payload, LLM providers that interpret a missing stream parameter as stream=true return streaming responses that the client cannot disable.

Problem Details

In the DefaultOpenAiClient.chatCompletion() method, the incoming request is rebuilt and the stream parameter is discarded in the process, even if it was explicitly set to false. This makes it impossible to disable streaming for LLM providers that require "stream": false in the JSON payload.

Relevant Code Section:

@Override
public SyncOrAsyncOrStreaming<ChatCompletionResponse> chatCompletion(OpenAiClientContext context,
    ChatCompletionRequest request) {
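    // The caller's stream value (even an explicit false) is overwritten with null here,
    // so the field is omitted from the serialized JSON payload: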
    ChatCompletionRequest syncRequest = ChatCompletionRequest.builder().from(request).stream(null).build();

    return new RequestExecutor<>(
            openAiApi.chatCompletions(context.headers(), syncRequest, apiVersion),
            r -> r,
            okHttpClient,
            formatUrl("chat/completions"),
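            // Streaming path: stream is unconditionally forced to true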
            () -> ChatCompletionRequest.builder().from(request).stream(true).build(),
            ChatCompletionResponse.class,
            r -> r,
            logStreamingResponses
    );
}

As shown above, the stream parameter is set to null in the syncRequest, causing it to be omitted from the final payload.

Reproduction Steps

  1. Create a ChatCompletionRequest object with .stream(false) explicitly set.
  2. Send the request through DefaultOpenAiClient.
  3. Observe the generated JSON request payload sent to the LLM provider.
  4. Notice that the stream parameter is completely absent from the JSON, leading to unintended streaming behavior.

Example Test Case:

@Test
void demonstrateStreamingForced() throws Exception {
    // Set up a mock server and client; enqueue a canned response so execute() can complete
    mockWebServer.enqueue(new MockResponse().setBody("{}"));
    DefaultOpenAiClient client = DefaultOpenAiClient.builder()
            .baseUrl(mockWebServer.url("/").toString())
            .openAiApiKey("test-key")
            .build();

    // Create a ChatCompletionRequest with stream set to false
    ChatCompletionRequest request = ChatCompletionRequest.builder()
            .model("random-model")
            .addUserMessage("Hello")
            .stream(false)
            .build();

    client.chatCompletion(request).execute();

    // Capture and inspect the outgoing request
    RecordedRequest recordedRequest = mockWebServer.takeRequest();
    String requestBody = recordedRequest.getBody().readUtf8();

    // Verify the stream parameter
    JsonNode jsonNode = objectMapper.readTree(requestBody);
    assertTrue(jsonNode.has("stream"), "Request payload should contain the stream parameter");
}

Observed Behavior:

The stream parameter is omitted from the JSON payload entirely:

{
  "model": "random-model",
  "messages": [{"role":"user","content":"Hello"}]
}

Expected Behavior:

The stream parameter should be explicitly included in the payload when set to false:

{
  "model": "random-model",
  "messages": [{"role":"user","content":"Hello"}],
  "stream": false
}

Affected Scenarios

This issue is critical for:

  • Local LLM deployments that require explicit stream settings.
  • Custom OpenAI-compatible APIs that interpret missing parameters as stream=true.
  • Use cases where streaming must be explicitly disabled to prevent unintended behavior.

Workaround

Currently, the only workaround is to intercept and rewrite outgoing requests manually or to introduce a proxy layer in front of the provider, both of which add unnecessary complexity. A sketch of the interceptor approach follows.
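A minimal sketch of such an interceptor, assuming the underlying OkHttpClient can be customized. ForceStreamFalseInterceptor is a hypothetical name, and the naive string splice stands in for proper JSON rewriting:

import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.MediaType;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;
import okio.Buffer;

// Hypothetical workaround: re-adds "stream": false to chat completion
// payloads that lack the field before they leave the client.
class ForceStreamFalseInterceptor implements Interceptor {

    private static final MediaType JSON = MediaType.get("application/json; charset=utf-8");

    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        RequestBody body = request.body();
        if (body != null && request.url().encodedPath().endsWith("/chat/completions")) {
            // Read the already-serialized JSON body into a string
            Buffer buffer = new Buffer();
            body.writeTo(buffer);
            String json = buffer.readUtf8();
            if (!json.contains("\"stream\"")) {
                // Naive splice: insert "stream": false before the closing brace.
                // A robust version would parse and rewrite the JSON properly.
                String patched = json.substring(0, json.lastIndexOf('}')) + ",\"stream\":false}";
                request = request.newBuilder()
                        .method(request.method(), RequestBody.create(JSON, patched))
                        .build();
            }
        }
        return chain.proceed(request);
    }
}

If openai4j exposed a hook for customizing its OkHttpClient, this could be registered via OkHttpClient.Builder#addInterceptor; absent such a hook, even this workaround requires forking the client, which underlines the need for a proper fix.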

Proposed Solution

Update DefaultOpenAiClient to propagate the stream parameter as-is, preserving its original value from the request object. If stream is explicitly set to false, that value should appear in the final JSON payload, for example as sketched below.
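One possible shape of the fix, sketched against the snippet above and untested. It assumes ChatCompletionRequest exposes a stream() accessor; only an explicit stream=true is cleared, since this path performs a synchronous call:

@Override
public SyncOrAsyncOrStreaming<ChatCompletionResponse> chatCompletion(OpenAiClientContext context,
    ChatCompletionRequest request) {
    // Preserve the caller's stream value (including an explicit false); only an
    // explicit true is cleared, because the synchronous path must not stream.
    ChatCompletionRequest syncRequest = Boolean.TRUE.equals(request.stream())
            ? ChatCompletionRequest.builder().from(request).stream(null).build()
            : ChatCompletionRequest.builder().from(request).build();

    return new RequestExecutor<>(
            openAiApi.chatCompletions(context.headers(), syncRequest, apiVersion),
            r -> r,
            okHttpClient,
            formatUrl("chat/completions"),
            // The streaming path still forces stream=true, as before
            () -> ChatCompletionRequest.builder().from(request).stream(true).build(),
            ChatCompletionResponse.class,
            r -> r,
            logStreamingResponses
    );
}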


Environment Details:

  • LLM(s): OpenAI-compatible APIs on LM Studio
  • Java version: 21

Additional Context:

This behavior was originally reported in the LangChain4j repository (langchain4j/langchain4j#2182) and identified as an issue to be addressed in openai4j.
