Description
Hello everyone, I have a Next.js project with this code in an API route:
```ts
const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo-0125',
  stream: true,
  temperature: 0.7,
  max_tokens: 4000,
  messages: [
    {
      role: 'system',
      content: systemMessage,
    },
    {
      role: 'user',
      content: humanMessage,
    },
  ],
});

const stream = OpenAIStream(response);
return new StreamingTextResponse(
  stream,
  {
    headers: {
      resultDbId,
      outputDbId,
    },
  },
  searchResult,
);
```
The text generated by the stream is rendered on screen as soon as it arrives from the API, in real time. I've noticed that for short texts it works perfectly, with the frontend receiving almost one character at a time. But for long texts it receives whole lines of text at once instead of individual characters, and I don't want that effect. How do I fix this?
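For reference, one client-side way to get character-by-character rendering regardless of how the network batches the stream is to re-split each incoming chunk before displaying it. This is a minimal sketch using the standard Web Streams API; the `smoothStream` name and the delay value are illustrative, not part of the original code:

```typescript
// Sketch: re-emit each incoming text chunk one character at a time,
// with an optional small delay between characters for a "typing" effect.
// Works on any ReadableStream<string>, e.g. a decoded response body.
function smoothStream(
  input: ReadableStream<string>,
  delayMs = 10
): ReadableStream<string> {
  return input.pipeThrough(
    new TransformStream<string, string>({
      async transform(chunk, controller) {
        for (const char of chunk) {
          controller.enqueue(char);
          // Pause briefly so long chunks still render gradually.
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      },
    })
  );
}
```

Note this only changes how the text is displayed; the server and any intermediate proxies may still coalesce small writes into larger network chunks, which is typically why longer responses arrive line by line.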
Code example
No response
Additional context
No response