OpenAI-Stream-HTTP-Adapter

An open-source, Go-based server that provides OpenAI-compatible API endpoints with full HTTP streaming support. It acts as a bridge between intelligent agent platforms (such as Dify) and various streaming services, converting Server-Sent Events (SSE) streams into OpenAI-compatible response formats.

Architecture

flowchart LR
    A[Intelligent Agent Platform] --> B[OpenAI-Stream-HTTP-Adapter<br/>:28081]
    B --> C[Downstream Stream Service<br/>:28080]
    
    subgraph B_Functions [Adapter Functions]
        B1[Converts SSE to OpenAI format]
        B2[Extracts message content]
        B3[Maintains streaming connection]
    end
    
    subgraph A_Interfaces [Platform Interfaces]
        A1[Standard OpenAI API interface]
        A2[HTTP streaming support]
        A3[LLM node integration]
    end
    
    B -.-> B_Functions
    A -.-> A_Interfaces

Components

1. Downstream Stream Service (mock-stream-server/)

  • Port: 28080
  • Endpoint: /stream
  • Provides raw SSE streaming with JSON data
  • Requires headers: Content-Type: application/json, Accept: text/event-stream
  • Example implementation for testing purposes; a minimal sketch follows below
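
The mock server's behavior can be reproduced in a few lines of Go. Below is a minimal sketch of such an SSE endpoint; the JSON payload shape ({"message": ...}) is an assumption for illustration, not necessarily the project's actual format:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// streamHandler writes a short sequence of SSE events with JSON payloads.
func streamHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	w.Header().Set("Connection", "keep-alive")

	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	// Each SSE event is a "data:" line followed by a blank line,
	// flushed immediately so the client sees it in real time.
	for _, msg := range []string{"Hello", ", ", "world!"} {
		fmt.Fprintf(w, "data: {\"message\": %q}\n\n", msg)
		flusher.Flush()
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	log.Fatal(http.ListenAndServe("0.0.0.0:28080", nil))
}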

2. OpenAI-Stream-HTTP-Adapter (Root Directory)

  • Port: 28081
  • Endpoints:
    • GET/POST /v1/models - List available models
    • POST /v1/chat/completions - Chat completions with streaming support
  • Converts downstream SSE streams to OpenAI-compatible format
  • Extracts and processes message content from complex JSON payloads
  • Acts as an intelligent proxy between agent platforms and streaming services; the chunk format it emits is sketched below
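
The chunks the adapter emits follow the public OpenAI chat.completion.chunk schema. A sketch of the corresponding data models (type names here are illustrative, not necessarily those in internal/models/):

// Illustrative data models; JSON tags follow the OpenAI streaming schema.
type ChatCompletionChunk struct {
	ID      string   `json:"id"`
	Object  string   `json:"object"` // always "chat.completion.chunk"
	Created int64    `json:"created"`
	Model   string   `json:"model"`
	Choices []Choice `json:"choices"`
}

type Choice struct {
	Index        int     `json:"index"`
	Delta        Delta   `json:"delta"`
	FinishReason *string `json:"finish_reason"` // nil until the final chunk
}

type Delta struct {
	Role    string `json:"role,omitempty"`
	Content string `json:"content,omitempty"`
}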

Setup Instructions

1. Start Mock Stream Server

cd mock-stream-server
go run main.go

2. Start OpenAI API Server

go run cmd/server/main.go

3. Test the Servers

Test mock server directly:

curl -H "Content-Type: application/json" -H "Accept: text/event-stream" http://localhost:28080/stream
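
The mock server should reply with a stream of raw SSE events. The payload below is an assumed example (matching the sketch above), not a guaranteed format:

data: {"message": "Hello"}

data: {"message": ", "}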

Test OpenAI API server:

curl -X POST http://localhost:28081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
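
With "stream": true, the adapter answers with OpenAI-style SSE chunks terminated by a [DONE] sentinel, roughly like this (the id and timestamp are placeholders):

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1700000000,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: [DONE]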

Platform Configuration

To configure this with intelligent agent platforms (like Dify) as an OpenAI-type LLM service:

  1. Go to your platform's settings → Model Providers → OpenAI
  2. Add a new model provider with these settings:
     • API Base URL: http://localhost:28081/v1
     • API Key: any value (the adapter does not validate it)
     • Model Name: gpt-3.5-turbo
  3. Enable streaming support in your workflow
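
To verify the adapter is reachable from the platform host before wiring it into a workflow, query the models endpoint first:

curl http://localhost:28081/v1/models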

Features

  • OpenAI API Compatibility: Full support for standard OpenAI endpoints
  • HTTP Streaming: Real-time Server-Sent Events (SSE) support
  • Intelligent Proxy: Bridges agent platforms with downstream streaming services
  • Message Extraction: Automatically extracts content from complex JSON payloads
  • Platform Integration: Seamless integration as OpenAI-type LLM provider
  • Cross-Platform: Built with Go for high performance and easy deployment
  • Open Source: MIT licensed, free for personal and commercial use

Port Configuration

  • Downstream Stream Service: 0.0.0.0:28080
  • OpenAI-Stream-HTTP-Adapter: 0.0.0.0:28081

Development

The project follows a clean architecture pattern with the following structure:

OpenAI-Stream-HTTP-Adapter/
├── cmd/
│   └── server/
│       └── main.go          # Application entry point
├── config/
│   └── config.yaml          # Configuration file
├── internal/
│   ├── config/              # Configuration loading
│   ├── handlers/            # HTTP request handlers
│   ├── models/              # Data models
│   ├── services/            # Business logic services
│   └── utils/               # Utility functions
├── test/                    # Unit tests
└── mock-stream-server/      # Mock SSE server for testing

To modify the streaming behavior, edit:

  • mock-stream-server/main.go - Change the downstream stream service behavior
  • internal/services/stream_service.go - Modify streaming proxy logic
  • internal/utils/sse_converter.go - Modify SSE format conversion (a rough sketch of this step follows)
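
As a rough illustration of the conversion step (the function name, signature, and downstream payload shape are assumptions, not the project's actual API), the proxy logic amounts to scanning downstream SSE lines, extracting the message content, and re-emitting OpenAI-style chunks, reusing the chunk models sketched earlier:

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// convertStream is a hypothetical sketch: it reads SSE events from the
// downstream service and forwards them as OpenAI-compatible chunks.
func convertStream(downstream io.Reader, w http.ResponseWriter, flusher http.Flusher) error {
	scanner := bufio.NewScanner(downstream)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue // skip blank lines and non-data SSE fields
		}

		// Extract the message content from the downstream JSON (shape assumed).
		var in struct {
			Message string `json:"message"`
		}
		if err := json.Unmarshal([]byte(strings.TrimPrefix(line, "data: ")), &in); err != nil {
			continue // tolerate malformed events
		}

		// Wrap the content in an OpenAI-compatible chunk and flush it out.
		chunk := ChatCompletionChunk{
			ID:      "chatcmpl-adapter",
			Object:  "chat.completion.chunk",
			Created: time.Now().Unix(),
			Model:   "gpt-3.5-turbo",
			Choices: []Choice{{Delta: Delta{Content: in.Message}}},
		}
		out, _ := json.Marshal(chunk)
		fmt.Fprintf(w, "data: %s\n\n", out)
		flusher.Flush()
	}
	return scanner.Err()
}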

Troubleshooting

  1. Port conflicts: Change the ports in the config/config.yaml file
  2. Connection refused: Ensure both servers are running
  3. Streaming not working: Check that stream: true is set in requests
  4. Platform integration: Verify the API base URL includes the /v1 suffix
  5. Configuration issues: The server falls back to a default config if the config file is missing or invalid

Build and Deployment

Using Makefile

# Build both applications
make build build-mock

# Run both services locally (using local config)
CONFIG_PATH=config/config.local.yaml make run

# Run mock server
make run-mock

# Run tests
make test

# Clean build artifacts
make clean

Using Docker

# Build and run main application
make docker-run

# Build and run mock server
make docker-run-mock

# Build and run both services with Docker Compose
make docker-compose-up

Using Docker Compose Directly

# Start both services
docker-compose up --build

# Start in background
docker-compose up --build -d

# Stop services
docker-compose down

Configuration

The application supports multiple configuration files:

  • config/config.yaml - Default configuration for Docker deployment
  • config/config.local.yaml - Local development configuration
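
The exact keys depend on internal/config, but a plausible shape for these files, assuming simple host/port and downstream settings, would be:

# Assumed structure for illustration; check internal/config for the real keys.
server:
  host: 0.0.0.0
  port: 28081
downstream:
  url: http://localhost:28080/stream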

You can specify a custom config file using the CONFIG_PATH environment variable:

CONFIG_PATH=config/custom-config.yaml go run cmd/server/main.go

Testing

Run unit tests:

go test ./test/...
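
A unit test for the conversion sketch above might look like this (convertStream and its signature are hypothetical); httptest.ResponseRecorder conveniently implements both http.ResponseWriter and http.Flusher:

import (
	"net/http/httptest"
	"strings"
	"testing"
)

func TestConvertStream(t *testing.T) {
	// One assumed downstream SSE event, terminated by a blank line.
	downstream := strings.NewReader("data: {\"message\": \"Hello\"}\n\n")
	rec := httptest.NewRecorder()

	if err := convertStream(downstream, rec, rec); err != nil {
		t.Fatalf("convertStream returned error: %v", err)
	}
	if body := rec.Body.String(); !strings.Contains(body, "\"content\":\"Hello\"") {
		t.Errorf("expected a chunk containing Hello, got %q", body)
	}
}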

Example Platform Configuration

When creating a workflow in intelligent agent platforms (like Dify):

  • Use "LLM" node type
  • Select "OpenAI" as provider
  • Set model to "gpt-3.5-turbo"
  • Enable streaming option
  • The node will automatically use the configured API server
