An OpenAI-compatible LLM API gateway: it exposes an OpenAI-format API by connecting to a local OpenCode HTTP Server.
```
┌─────────────────────────────────────────┐
│               free-token                │
│  (OpenAI-compatible API gateway :3000)  │
└──────────────┬──────────────────────────┘
               │ HTTP
               ▼
┌─────────────────────────────────────────┐
│          OpenCode HTTP Server           │
│        (built-in server :4096)          │
│  manages auth and session state         │
└──────────────┬──────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────┐
│          OpenCode Zen Models            │
│   big-pickle, kimi-k2.5, minimax...     │
└─────────────────────────────────────────┘
```
- ✅ OpenAI-compatible API (`/v1/chat/completions`, `/v1/models`)
- ✅ Streaming and non-streaming responses
- ✅ Automatic OpenCode HTTP Server lifecycle management
- ✅ Automatic authentication handling
- ✅ Cross-platform (Windows/Linux/macOS)
```bash
npm install

# Use the start script (manages the OpenCode Server automatically)
./scripts/start.sh

# Or start manually
npm run build
npm start
```

```bash
# Health check
curl http://localhost:3000/health

# List available models
curl http://localhost:3000/v1/models

# Send a message (non-streaming)
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"big-pickle","messages":[{"role":"user","content":"Hello!"}],"stream":false}'
```
```bash
# Send a message (streaming)
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"big-pickle","messages":[{"role":"user","content":"Hello!"}],"stream":true}'
```

```bash
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "big-pickle",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "stream": true
  }'
```

Request parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | No | Model ID; defaults to `big-pickle` |
| `messages` | array | Yes | Array of messages |
| `stream` | boolean | No | Enable streaming; defaults to `false` |
Message format:

```json
{
  "messages": [
    {"role": "user", "content": "Your question"}
  ]
}
```

```bash
curl http://localhost:3000/v1/models
```

Response format:
```json
{
  "object": "list",
  "data": [
    {
      "id": "big-pickle",
      "object": "model",
      "created": 1234567890,
      "owned_by": "opencode",
      "name": "Big Pickle",
      "provider": "opencode"
    }
  ]
}
```

```bash
curl http://localhost:3000/health
```

Response:
```json
{"status":"ok"}
```

| Model ID | Name | Description |
|---|---|---|
| `big-pickle` | Big Pickle | OpenCode default model |
| `trinity-large-preview-free` | Trinity Large Preview | Latest preview model |
| `gpt-5-nano` | GPT-5 Nano | Lightweight GPT-5 |
| `glm-4.7-free` | GLM-4.7 Free | Zhipu AI free model |
| `minimax-m2.1-free` | MiniMax M2.1 Free | MiniMax free model |
| `kimi-k2.5-free` | Kimi K2.5 Free | Moonshot AI free model |
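A client can pre-validate a requested model ID against this table before sending a request. The sketch below does that, falling back to `big-pickle` to mirror the gateway's own default; `resolveModel` and `KNOWN_MODELS` are hypothetical helpers, not part of this project:

```typescript
// Model IDs copied from the table above.
const KNOWN_MODELS = [
  "big-pickle",
  "trinity-large-preview-free",
  "gpt-5-nano",
  "glm-4.7-free",
  "minimax-m2.1-free",
  "kimi-k2.5-free",
];

// Hypothetical client-side helper: return the requested ID if known,
// otherwise fall back to the gateway's default model.
function resolveModel(requested?: string): string {
  return requested && KNOWN_MODELS.includes(requested) ? requested : "big-pickle";
}

console.log(resolveModel("gpt-5-nano")); // → "gpt-5-nano"
console.log(resolveModel("gpt-4o"));     // → "big-pickle"
```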
Add to `~/.continue/config.json`:

```json
{
  "models": [
    {
      "title": "free-token",
      "provider": "openai",
      "model": "big-pickle",
      "apiKey": "any-string",
      "baseUrl": "http://localhost:3000/v1"
    }
  ]
}
```

```bash
export OPENAI_API_KEY="any-string"
export OPENAI_BASE_URL="http://localhost:3000/v1"
```

Add in settings:
```
API Provider: OpenAI
Base URL:     http://localhost:3000/v1
API Key:      any-string
Model:        big-pickle
```
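For a quick programmatic smoke test, a non-streaming response can be unwrapped like any OpenAI chat completion. This sketch assumes the standard `choices[0].message.content` shape; `extractReply` is a hypothetical helper, not part of this project:

```typescript
// Minimal shape of an OpenAI-style non-streaming chat completion.
interface ChatCompletion {
  choices: { message: { role: string; content: string } }[];
}

// Hypothetical helper: pull the assistant text out of a completion response,
// failing loudly if the response does not match the expected shape.
function extractReply(completion: ChatCompletion): string {
  const content = completion.choices?.[0]?.message?.content;
  if (typeof content !== "string") {
    throw new Error("unexpected response shape");
  }
  return content;
}

const sample: ChatCompletion = {
  choices: [{ message: { role: "assistant", content: "Hi there!" } }],
};
console.log(extractReply(sample)); // → "Hi there!"
```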
```javascript
const response = await fetch('http://localhost:3000/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'big-pickle',
    messages: [
      { role: 'user', content: 'Hello!' }
    ],
    stream: true
  })
});

// response.body yields raw Uint8Array chunks; decode them to text
const decoder = new TextDecoder();
for await (const chunk of response.body) {
  console.log(decoder.decode(chunk, { stream: true }));
}
```

| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | free-token listen port |
| `OPENCODE_SERVER_HOST` | `127.0.0.1` | OpenCode HTTP Server address |
| `OPENCODE_SERVER_PORT` | `4096` | OpenCode HTTP Server port |
| `OPENCODE_SERVER_USERNAME` | `opencode` | Auth username |
| `OPENCODE_SERVER_PASSWORD` | (auto-detected) | Auth password |
| `DEFAULT_MODEL` | `big-pickle` | Default model |
| `REQUEST_TIMEOUT` | `300000` | Request timeout (ms) |
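These variables can also be kept in a `.env` file at the project root (the repository ships an `.env.example`, so a dotenv-style loader is assumed). The values below simply restate the defaults from the table:

```bash
PORT=3000
OPENCODE_SERVER_HOST=127.0.0.1
OPENCODE_SERVER_PORT=4096
OPENCODE_SERVER_USERNAME=opencode
# OPENCODE_SERVER_PASSWORD is detected automatically when unset
DEFAULT_MODEL=big-pickle
REQUEST_TIMEOUT=300000
```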
The start script automatically:
- Starts the OpenCode HTTP Server (if it is not already running)
- Waits for the OpenCode Server to become ready
- Starts free-token
- Prints the API endpoint information
```bash
# Linux/macOS
./scripts/start.sh
./scripts/stop.sh

# Windows
scripts\start.bat
scripts\stop.bat
```

```bash
# Start the OpenCode Server
opencode serve --port 4096 --hostname 127.0.0.1

# Start free-token
npm start

# Or specify a port
PORT=3000 npm start
```

```bash
# Install dependencies
npm install

# Development mode (hot reload)
npm run dev

# Build
npm run build

# Run tests
npm test
```

```bash
# Using PM2
npm install -g pm2
pm2 start dist/index.js --name free-token
pm2 save
pm2 startup
```

Create `/etc/systemd/system/free-token.service`:
```ini
[Unit]
Description=free-token LLM Provider
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/path/to/free-token
ExecStart=/usr/bin/node /path/to/free-token/dist/index.js
Restart=always
RestartSec=10
Environment=PORT=3000
Environment=OPENCODE_SERVER_PORT=4096

[Install]
WantedBy=multi-user.target
```

```bash
sudo systemctl daemon-reload
sudo systemctl enable free-token
sudo systemctl start free-token
```

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY dist ./dist
COPY package.json ./
RUN npm ci --only=production
EXPOSE 3000
CMD ["node", "dist/index.js"]
```

```bash
# Check which process holds the port
ss -tlnp | grep 3000

# View logs
tail -f logs/free-token.log
```

```bash
# Check whether OpenCode is installed
which opencode

# Test the OpenCode Server
curl http://127.0.0.1:4096/global/health
```

```bash
# Verify the model list
curl http://localhost:3000/v1/models

# Test a specific model
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"big-pickle","messages":[{"role":"user","content":"test"}]}'
```

```
free-token/
├── src/              # TypeScript source
├── dist/             # Compiled JavaScript
├── scripts/          # Start/stop scripts
├── docs/             # Documentation
├── logs/             # Log files
├── .env.example      # Example environment variables
└── package.json
```
MIT