refactor: use OpenAI SDK for streaming LLM calls #130

Merged
solderzzc merged 1 commit into develop from feat/streaming-benchmark-env-vars on Mar 7, 2026
Conversation

@solderzzc
Member

  • Replace raw fetch + manual SSE parsing with OpenAI SDK streaming
  • Create llmClient/vlmClient instances with proper baseURL resolution
  • SDK handles max_completion_tokens, auth, and SSE parsing automatically
  • Add openai npm dependency to skill package.json
  • Healthcheck uses SDK non-streaming call

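The refactor described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the env var names (`LLM_BASE_URL`, `LLM_API_KEY`) and the model name are hypothetical placeholders, and only `resolveBaseUrl` is exercised without network access.

```javascript
// Hypothetical baseURL resolution: prefer an explicit override from the
// environment, else fall back to the public OpenAI endpoint.
function resolveBaseUrl(env) {
  return env.LLM_BASE_URL ?? "https://api.openai.com/v1";
}

async function streamCompletion(prompt, env = process.env) {
  // Lazy dynamic import so this sketch loads even where the `openai`
  // package is not installed.
  const { default: OpenAI } = await import("openai");

  // The SDK client replaces raw fetch: it handles auth headers,
  // request shaping (e.g. max_completion_tokens), and SSE parsing.
  const llmClient = new OpenAI({
    apiKey: env.LLM_API_KEY,
    baseURL: resolveBaseUrl(env),
  });

  // stream: true returns an async iterable of parsed chunks, so no
  // manual SSE line splitting is needed.
  const stream = await llmClient.chat.completions.create({
    model: "gpt-4o-mini", // hypothetical model name
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

module.exports = { resolveBaseUrl, streamCompletion };
```

A healthcheck along the lines the PR describes would make the same `chat.completions.create` call without `stream: true` and simply check that a response comes back.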
@solderzzc solderzzc merged commit 28c68a7 into develop Mar 7, 2026
1 check passed
@solderzzc solderzzc deleted the feat/streaming-benchmark-env-vars branch March 7, 2026 16:33
