An open-source Go-based server that provides OpenAI-compatible API endpoints with full HTTP streaming support. Acts as a bridge between intelligent agent platforms (like Dify) and various streaming services, converting Server-Sent Events (SSE) streams into OpenAI-compatible response formats.
```mermaid
flowchart LR
    A[Intelligent Agent Platform] --> B[OpenAI-Stream-HTTP-Adapter<br/>:28081]
    B --> C[Downstream Stream Service<br/>:28080]
    subgraph B_Functions [Adapter Functions]
        B1[Converts SSE to OpenAI format]
        B2[Extracts message content]
        B3[Maintains streaming connection]
    end
    subgraph A_Interfaces [Platform Interfaces]
        A1[Standard OpenAI API interface]
        A2[HTTP streaming support]
        A3[LLM node integration]
    end
    B -.-> B_Functions
    A -.-> A_Interfaces
```
Mock stream server (the downstream stream service):
- Port: `28080`
- Endpoint: `/stream`
- Provides raw SSE streaming with JSON data
- Requires headers: `Content-Type: application/json`, `Accept: text/event-stream`
- Example implementation for testing purposes
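To make the mock server's behavior concrete, here is a minimal sketch of the kind of SSE handler it exposes on `/stream`. This is an illustration only, assuming a simple `{"message": "..."}` payload; see `mock-stream-server/main.go` for the actual format.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// streamHandler emits a few JSON events over SSE.
// The payload shape here is illustrative, not the mock server's exact format.
func streamHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	w.Header().Set("Connection", "keep-alive")

	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	for _, chunk := range []string{"Hello", ", ", "world"} {
		// Each SSE event carries a JSON object in its data field.
		fmt.Fprintf(w, "data: {\"message\": %q}\n\n", chunk)
		flusher.Flush()
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	log.Fatal(http.ListenAndServe(":28080", nil))
}
```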
OpenAI-Stream-HTTP-Adapter (the OpenAI-compatible API server):
- Port: `28081`
- Endpoints:
  - `GET/POST /v1/models` - List available models
  - `POST /v1/chat/completions` - Chat completions with streaming support
- Converts downstream SSE streams to OpenAI-compatible format
- Extracts and processes message content from complex JSON payloads
- Acts as intelligent proxy between agent platforms and streaming services
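To illustrate the conversion the adapter performs, here is a hedged Go sketch that turns one downstream SSE payload into an OpenAI `chat.completion.chunk`. The downstream field name (`message`) and all type and function names below are assumptions for illustration; the project's real logic lives in `internal/services/stream_service.go` and `internal/utils/sse_converter.go`.

```go
package sseconvert

import "encoding/json"

// downstreamEvent is an assumed shape for the JSON carried in each downstream SSE "data:" line.
type downstreamEvent struct {
	Message string `json:"message"`
}

// delta, choice, and openAIChunk mirror the OpenAI chat.completion.chunk wire format.
type delta struct {
	Content string `json:"content"`
}

type choice struct {
	Index int   `json:"index"`
	Delta delta `json:"delta"`
}

type openAIChunk struct {
	ID      string   `json:"id"`
	Object  string   `json:"object"`
	Created int64    `json:"created"`
	Model   string   `json:"model"`
	Choices []choice `json:"choices"`
}

// toOpenAIChunk converts one downstream SSE payload into an OpenAI streaming chunk.
func toOpenAIChunk(data []byte, model string, created int64) ([]byte, error) {
	var ev downstreamEvent
	if err := json.Unmarshal(data, &ev); err != nil {
		return nil, err
	}
	chunk := openAIChunk{
		ID:      "chatcmpl-adapter",
		Object:  "chat.completion.chunk",
		Created: created,
		Model:   model,
		Choices: []choice{{Index: 0, Delta: delta{Content: ev.Message}}},
	}
	return json.Marshal(chunk)
}
```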
Start the mock stream server:
```bash
cd mock-stream-server
go run main.go
```
Start the OpenAI-Stream-HTTP-Adapter (from the project root):
```bash
go run cmd/server/main.go
```
Test the mock server directly:
```bash
curl -H "Content-Type: application/json" -H "Accept: text/event-stream" http://localhost:28080/stream
```
Test the OpenAI API server:
```bash
curl -X POST http://localhost:28081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
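For programmatic testing, here is a minimal Go client sketch that sends the same request and prints the streamed deltas. It assumes the standard OpenAI streaming convention, where each `data:` line carries a `chat.completion.chunk` and the stream ends with `data: [DONE]`.

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	body := `{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Hello!"}],"stream":true}`
	resp, err := http.Post("http://localhost:28081/v1/chat/completions",
		"application/json", bytes.NewBufferString(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue // skip blank lines between SSE events
		}
		payload := strings.TrimPrefix(line, "data: ")
		if payload == "[DONE]" {
			break
		}
		// Decode only the delta content from each chunk.
		var chunk struct {
			Choices []struct {
				Delta struct {
					Content string `json:"content"`
				} `json:"delta"`
			} `json:"choices"`
		}
		if err := json.Unmarshal([]byte(payload), &chunk); err != nil {
			continue
		}
		for _, c := range chunk.Choices {
			fmt.Print(c.Delta.Content)
		}
	}
	fmt.Println()
}
```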
To configure this with intelligent agent platforms (like Dify) as an OpenAI-type LLM service:
- Go to your platform's settings → Model Providers → OpenAI
- Add a new model provider with these settings:
  - API Base URL: `http://localhost:28081/v1`
  - API Key: any value (not used by the mock server)
  - Model Name: `gpt-3.5-turbo`
- Enable streaming support in your workflow
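Because the adapter speaks the standard OpenAI API, any OpenAI-compatible client should be able to point at it in the same way. As one illustration (not part of this project), the community `github.com/sashabaranov/go-openai` client can be configured against the adapter's base URL like this:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	cfg := openai.DefaultConfig("any-key") // any value; the key is not used by the mock setup
	cfg.BaseURL = "http://localhost:28081/v1"
	client := openai.NewClientWithConfig(cfg)

	stream, err := client.CreateChatCompletionStream(context.Background(), openai.ChatCompletionRequest{
		Model:    openai.GPT3Dot5Turbo,
		Messages: []openai.ChatCompletionMessage{{Role: openai.ChatMessageRoleUser, Content: "Hello!"}},
		Stream:   true,
	})
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Print streamed deltas until the server ends the stream.
	for {
		resp, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Print(resp.Choices[0].Delta.Content)
	}
	fmt.Println()
}
```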
- ✅ OpenAI API Compatibility: Full support for standard OpenAI endpoints
- ✅ HTTP Streaming: Real-time Server-Sent Events (SSE) support
- ✅ Intelligent Proxy: Bridges agent platforms with downstream streaming services
- ✅ Message Extraction: Automatically extracts content from complex JSON payloads
- ✅ Platform Integration: Seamless integration as OpenAI-type LLM provider
- ✅ Cross-Platform: Built with Go for high performance and easy deployment
- ✅ Open Source: MIT licensed, free for personal and commercial use
- Downstream Stream Service: `0.0.0.0:28080`
- OpenAI-Stream-HTTP-Adapter: `0.0.0.0:28081`
The project follows a clean architecture pattern with the following structure:
```
OpenAI-Stream-HTTP-Adapter/
├── cmd/
│   └── server/
│       └── main.go          # Application entry point
├── config/
│   └── config.yaml          # Configuration file
├── internal/
│   ├── config/              # Configuration loading
│   ├── handlers/            # HTTP request handlers
│   ├── models/              # Data models
│   ├── services/            # Business logic services
│   └── utils/               # Utility functions
├── test/                    # Unit tests
└── mock-stream-server/      # Mock SSE server for testing
```
To modify the streaming behavior, edit:
- `mock-stream-server/main.go` - Change the downstream stream service behavior
- `internal/services/stream_service.go` - Modify the streaming proxy logic
- `internal/utils/sse_converter.go` - Modify the SSE format conversion
- Port conflicts: Change ports in the `config/config.yaml` file
- Connection refused: Ensure both servers are running
- Streaming not working: Check that `stream: true` is set in requests
- Platform integration: Verify the API base URL includes the `/v1` suffix
- Configuration issues: The server will use the default config if the config file is missing or invalid
```bash
# Build both applications
make build build-mock

# Run both services locally (using local config)
CONFIG_PATH=config/config.local.yaml make run

# Run mock server
make run-mock

# Run tests
make test

# Clean build artifacts
make clean
```
```bash
# Build and run main application
make docker-run

# Build and run mock server
make docker-run-mock

# Build and run both services with Docker Compose
make docker-compose-up
```
```bash
# Start both services
docker-compose up --build

# Start in background
docker-compose up --build -d

# Stop services
docker-compose down
```
The application supports multiple configuration files:
- `config/config.yaml` - Default configuration for Docker deployment
- `config/config.local.yaml` - Local development configuration
You can specify a custom config file using the `CONFIG_PATH` environment variable:
```bash
CONFIG_PATH=config/custom-config.yaml go run cmd/server/main.go
```
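For reference, here is a sketch of how a loader can honor `CONFIG_PATH` and fall back to built-in defaults when the file is missing or invalid, matching the behavior noted in the troubleshooting section. The struct fields, YAML keys, default values, and the YAML dependency below are assumptions, not the project's actual schema.

```go
package config

import (
	"os"

	"gopkg.in/yaml.v3" // assumed YAML dependency for this sketch
)

// Config fields here are illustrative; the real schema lives in config/config.yaml.
type Config struct {
	Server struct {
		Address string `yaml:"address"`
	} `yaml:"server"`
	Downstream struct {
		StreamURL string `yaml:"stream_url"`
	} `yaml:"downstream"`
}

// Load reads the file named by CONFIG_PATH (default config/config.yaml) and
// falls back to built-in defaults if the file is missing or invalid.
func Load() *Config {
	cfg := defaults()
	path := os.Getenv("CONFIG_PATH")
	if path == "" {
		path = "config/config.yaml"
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return cfg // missing file: use defaults
	}
	if err := yaml.Unmarshal(data, cfg); err != nil {
		return defaults() // invalid file: use defaults
	}
	return cfg
}

func defaults() *Config {
	cfg := &Config{}
	cfg.Server.Address = "0.0.0.0:28081"
	cfg.Downstream.StreamURL = "http://localhost:28080/stream"
	return cfg
}
```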
Run unit tests:
```bash
go test ./test/...
```
When creating a workflow in intelligent agent platforms (like Dify):
- Use "LLM" node type
- Select "OpenAI" as provider
- Set model to "gpt-3.5-turbo"
- Enable streaming option
- The node will automatically use the configured API server