
Conversation

BossyT (Owner) commented Aug 16, 2025

This PR implements Issue #1 (LLM: Switch default to OpenAI + healthcheck + streaming endpoint).

Key changes

  • OpenAI client: Added lib/ai/openai.ts, a helper that calls the OpenAI Chat Completions API with streaming, reading AI_PROVIDER, OPENAI_API_KEY, OPENAI_MODEL, and OPENAI_BASE_URL from the environment.
  • Chat endpoint: Added an Edge Runtime route at app/api/ai/chat/route.ts that validates input, ensures AI_PROVIDER is set to openai, and streams the raw assistant SSE output back to the client.
  • Health endpoint: Added Edge Runtime route at app/api/health/route.ts which checks for a valid OpenAI API key, queries the OpenAI /models endpoint, and returns { status, services: { openai: { ok, model, error? }}} without crashing when keys are missing.
  • README update: Added an AI Provider configuration section documenting the new environment variables and clarifying optional providers.

These changes allow the app to default to OpenAI, stream chat responses to clients, and expose a healthcheck for monitoring. If OPENAI_API_KEY is missing or invalid, the health endpoint will return ok: false with an error message instead of throwing.

Closes #1

BossyT added 4 commits August 15, 2025 22:45
Add minimal OpenAI client to lib/ai/openai.ts which reads environment variables and provides a streamChat function for streaming chat completions.
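A helper like this might look roughly as follows. Only `streamChat`, the file path, and the environment variable names come from the commit; everything else (including the `contentFromSSELine` helper for parsing SSE lines) is an illustrative sketch, not the actual implementation:

```typescript
// Sketch of lib/ai/openai.ts (assumed shape; only streamChat and the
// environment variable names are taken from the PR description).

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const BASE_URL = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
const MODEL = process.env.OPENAI_MODEL ?? "gpt-4o-mini";

// Extract the assistant text from a single SSE data line, or null if
// the line carries no content (e.g. "[DONE]" or a keep-alive event).
export function contentFromSSELine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  try {
    const json = JSON.parse(payload);
    return json.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null;
  }
}

// Call the Chat Completions API with stream: true and return the raw
// response body as a ReadableStream of SSE bytes.
export async function streamChat(
  messages: ChatMessage[]
): Promise<ReadableStream<Uint8Array>> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: MODEL, messages, stream: true }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`OpenAI request failed: ${res.status}`);
  }
  return res.body;
}
```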
Implement POST /api/ai/chat with the Edge runtime for streaming OpenAI chat completions. Validates the messages and provider, calls streamChat, and returns the SSE stream as raw text.
Adds a new Edge runtime GET endpoint at `/api/health` which performs a health check for the OpenAI provider. It reads `AI_PROVIDER`, `OPENAI_API_KEY`, `OPENAI_MODEL`, and `OPENAI_BASE_URL` from the environment, pings the OpenAI `/models` endpoint to determine service status, and returns a JSON payload: `{ status, services: { openai: { ok, model, error? }}}`. Missing keys or API failures are reported gracefully without crashing the app.
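Given the payload shape and behavior in that commit message, a health route along these lines would fit; the `buildHealthPayload` and `checkOpenAI` helper names and the "degraded" status value are assumptions for illustration:

```typescript
// Sketch of app/api/health/route.ts (assumed shape; the PR confirms the
// route path, Edge runtime, the /models ping, and the payload
// { status, services: { openai: { ok, model, error? }}}).

export const runtime = "edge";

type OpenAIHealth = { ok: boolean; model: string; error?: string };

// Build the response payload from the probe result; report a non-ok
// status instead of throwing, so monitoring sees the failure gracefully.
export function buildHealthPayload(openai: OpenAIHealth) {
  return {
    status: openai.ok ? "ok" : "degraded",
    services: { openai },
  };
}

async function checkOpenAI(): Promise<OpenAIHealth> {
  const model = process.env.OPENAI_MODEL ?? "gpt-4o-mini";
  const key = process.env.OPENAI_API_KEY;
  if (!key) return { ok: false, model, error: "OPENAI_API_KEY is not set" };
  const base = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
  try {
    const res = await fetch(`${base}/models`, {
      headers: { Authorization: `Bearer ${key}` },
    });
    if (!res.ok) return { ok: false, model, error: `OpenAI returned ${res.status}` };
    return { ok: true, model };
  } catch (err) {
    return { ok: false, model, error: String(err) };
  }
}

export async function GET(): Promise<Response> {
  return Response.json(buildHealthPayload(await checkOpenAI()));
}
```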
Adds an "AI Provider configuration" section to the README. This section documents the default OpenAI provider (`AI_PROVIDER=openai`) and lists the required environment variables: `OPENAI_API_KEY`, `OPENAI_MODEL` (default `gpt-4o-mini`), and `OPENAI_BASE_URL` (defaults to `https://api.openai.com/v1`). It also groups alternative provider keys (Anthropic, Gemini, Groq) under an optional section.
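Based on that commit message, the documented environment variables might be summarized in a `.env` fragment like this; the exact alternative-provider variable names are guesses, not taken from the PR:

```shell
# Default provider (values from the PR description)
AI_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here              # required
OPENAI_MODEL=gpt-4o-mini                     # default
OPENAI_BASE_URL=https://api.openai.com/v1    # default

# Optional alternative providers (variable names assumed)
# ANTHROPIC_API_KEY=...
# GEMINI_API_KEY=...
# GROQ_API_KEY=...
```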

vercel bot commented Aug 16, 2025

The latest updates on your projects.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| starstack | Error | | | Aug 16, 2025 5:58am |
| starstack-new | Ready | Preview | Comment | Aug 16, 2025 5:58am |

@BossyT BossyT merged commit 28a26c1 into main Aug 16, 2025
2 of 3 checks passed