LLM: Switch default to OpenAI + healthcheck + streaming endpoint #1

Goals

  • Provide a thin OpenAI client that reads its configuration from environment variables (see the client sketch after this list).
  • Expose a streaming chat endpoint at /api/ai/chat with raw text output using Edge runtime.
  • Expose a health endpoint at /api/health reporting status of the OpenAI service.
  • Update documentation with required environment variables and show example usage.

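As a reference point, here is a minimal sketch of what lib/ai/openai.ts could look like. It assumes the standard OpenAI chat completions REST API; the ChatMessage and StreamChatOptions names and the gpt-4o-mini fallback model are illustrative, not final.

```ts
// lib/ai/openai.ts — illustrative sketch, not the final implementation.
// Configuration comes entirely from environment variables so the app can
// boot without secrets; the AI_PROVIDER=openai gate lives in the API route.

export type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

export type StreamChatOptions = {
  messages: ChatMessage[];
  model?: string;
  temperature?: number;
};

// Returns the raw streaming Response from the chat completions endpoint.
export async function streamChat(opts: StreamChatOptions): Promise<Response> {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) throw new Error("OPENAI_API_KEY is not set");

  const baseUrl = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
  const model = opts.model ?? process.env.OPENAI_MODEL ?? "gpt-4o-mini"; // assumed default

  return fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      temperature: opts.temperature,
      messages: opts.messages,
      stream: true, // request server-sent events from OpenAI
    }),
  });
}
```
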
Acceptance Criteria

  • A new file lib/ai/openai.ts implements streamChat() using the OPENAI_API_KEY, OPENAI_MODEL, OPENAI_BASE_URL, and AI_PROVIDER=openai environment variables.
  • A new API route /api/ai/chat, running in the Edge runtime, accepts a chat payload { messages, model?, temperature? } and streams back plain text. If AI_PROVIDER is not set to openai, the route returns an error (see the route sketch after this list).
  • A new API route /api/health returns JSON { status, services: { openai: { ok, model, error? }}}; it uses OPENAI_API_KEY to validate connectivity with OpenAI, and sets ok: false with a helpful error when the key is missing or invalid (a health-route sketch follows this list).
  • The README documents the required environment variables (AI_PROVIDER, OPENAI_API_KEY, OPENAI_MODEL, OPENAI_BASE_URL) and includes simple instructions for the streaming chat and the health check (an example consumer follows this list).
  • The feature does not crash the app when secrets are missing; instead, /api/health reports services.openai.ok: false with an error message.
  • Include screenshots of the new /api/health endpoint (showing both successful and missing-key responses) and a sample streamed chat in the PR description.

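A possible shape for the chat route, assuming the Next.js App Router (app/api/ai/chat/route.ts) and the streamChat sketch above. The SSE parsing here re-emits only OpenAI's text deltas as plain text; error codes and messages are placeholders.

```ts
// app/api/ai/chat/route.ts — illustrative sketch for the Edge runtime.
import { streamChat, type ChatMessage } from "@/lib/ai/openai";

export const runtime = "edge";

export async function POST(req: Request): Promise<Response> {
  // Provider gate: the route refuses to run unless OpenAI is selected.
  if (process.env.AI_PROVIDER !== "openai") {
    return Response.json({ error: "AI_PROVIDER must be set to 'openai'" }, { status: 400 });
  }

  const { messages, model, temperature } = (await req.json()) as {
    messages: ChatMessage[];
    model?: string;
    temperature?: number;
  };

  const upstream = await streamChat({ messages, model, temperature });
  if (!upstream.ok || !upstream.body) {
    return Response.json({ error: `OpenAI request failed (${upstream.status})` }, { status: 502 });
  }

  // Re-emit only the text deltas from OpenAI's SSE stream as plain text.
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  let buffer = "";

  const transform = new TransformStream<Uint8Array, Uint8Array>({
    transform(chunk, controller) {
      buffer += decoder.decode(chunk, { stream: true });
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
      for (const line of lines) {
        if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
        try {
          const delta = JSON.parse(line.slice(6)).choices?.[0]?.delta?.content;
          if (delta) controller.enqueue(encoder.encode(delta));
        } catch {
          // ignore keep-alives and malformed lines
        }
      }
    },
  });

  return new Response(upstream.body.pipeThrough(transform), {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```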
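For the health endpoint, one low-cost way to validate the key is an authenticated call to OpenAI's models listing. The route below is a sketch; the "degraded" status value and the exact error strings are assumptions.

```ts
// app/api/health/route.ts — illustrative sketch.
export const runtime = "edge";

export async function GET(): Promise<Response> {
  const model = process.env.OPENAI_MODEL ?? "unknown";
  const apiKey = process.env.OPENAI_API_KEY;
  const baseUrl = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";

  let openai: { ok: boolean; model: string; error?: string };

  if (!apiKey) {
    // Missing secrets must not crash the app; report them instead.
    openai = { ok: false, model, error: "OPENAI_API_KEY is not set" };
  } else {
    try {
      // A lightweight authenticated request: listing models validates the key.
      const res = await fetch(`${baseUrl}/models`, {
        headers: { Authorization: `Bearer ${apiKey}` },
      });
      openai = res.ok
        ? { ok: true, model }
        : { ok: false, model, error: `OpenAI responded with HTTP ${res.status}` };
    } catch (err) {
      openai = { ok: false, model, error: err instanceof Error ? err.message : String(err) };
    }
  }

  return Response.json({ status: openai.ok ? "ok" : "degraded", services: { openai } });
}
```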
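And a hedged example of the README usage instructions: a small Node 18+ consumer that hits both endpoints, assuming the app runs locally on port 3000 with AI_PROVIDER=openai and a valid OPENAI_API_KEY.

```ts
// Illustrative consumer for the README (Node 18+, run as an ES module).
async function main() {
  // Health check: expect { status, services: { openai: { ok, model, error? } } }.
  const health = await fetch("http://localhost:3000/api/health");
  console.log(await health.json());

  // Streaming chat: read the plain-text body chunk by chunk.
  const res = await fetch("http://localhost:3000/api/ai/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: "Say hello." }] }),
  });
  if (!res.ok || !res.body) throw new Error(`chat request failed (${res.status})`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}

main().catch(console.error);
```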