A lightweight Bun + Express template that connects to the Testune AI API and streams chat responses in real time using Server-Sent Events (SSE).
Use this project as a starting point for building AI apps, proxies, or backend services that need live, token-by-token responses from an AI model.
- ⚡ Built with Bun + Express
- 🔌 Connects to the Testune AI API for LLM interactions
- 📡 Supports SSE streaming (just like OpenAI’s streaming responses)
- 💬 Example `/chat` endpoint you can call from your frontend (see the sketch below)
- 🛠 Easy to extend with your own routes, auth, or business logic
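
The heart of the template is the streaming proxy: the Express route accepts a chat request, forwards it to the Testune API, and pipes the upstream SSE bytes straight back to the client. Below is a minimal sketch of that pattern; the Testune URL, the `TESTUNE_API_KEY` env var name, and the request payload shape are placeholders — check `src/index.ts` and the Testune docs for the real values.

```ts
import express from "express";

const app = express();
app.use(express.json());

// Placeholder endpoint and key — substitute the real Testune API values.
const TESTUNE_URL = "https://api.testune.ai/v1/chat/completions";
const TESTUNE_API_KEY = process.env.TESTUNE_API_KEY;

app.post("/chat", async (req, res) => {
  // Announce an SSE stream to the client.
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  // Forward the chat request upstream with streaming enabled
  // (assumes an OpenAI-style { messages, stream } payload).
  const upstream = await fetch(TESTUNE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${TESTUNE_API_KEY}`,
    },
    body: JSON.stringify({ messages: req.body.messages, stream: true }),
  });

  if (!upstream.ok || !upstream.body) {
    res.status(502).end("Upstream request failed");
    return;
  }

  // Relay upstream SSE chunks to the client as they arrive.
  const reader = upstream.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    res.write(decoder.decode(value, { stream: true }));
  }
  res.end();
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```
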
```
.
├── src/
│   └── index.ts      # Express server with streaming proxy
├── package.json
├── tsconfig.json
└── README.md
```
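
To try the endpoint end to end, you can consume the stream with plain `fetch` from any client that supports web streams (browser or Bun). A rough example, assuming the server above is running on `localhost:3000`:

```ts
// POST a chat message to /chat and print streamed chunks as they arrive.
const res = await fetch("http://localhost:3000/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages: [{ role: "user", content: "Hello!" }] }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Raw SSE frames, e.g. "data: {...}\n\n" — parse as needed for your UI.
  process.stdout.write(decoder.decode(value, { stream: true }));
}
```
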