47 changes: 42 additions & 5 deletions docs/contributing/http-integration.md
@@ -36,6 +36,8 @@ tests/transports-integrations/tests/integrations/

HTTP integrations provide API-compatible endpoints that translate between external service formats (OpenAI, Anthropic, etc.) and Bifrost's unified request/response format. Each integration follows a standardized pattern using Bifrost's `GenericRouter` architecture.

**Key Feature**: All integrations should support **multi-provider model syntax** using `ParseModelString`, allowing users to access any provider through any SDK (e.g., `"anthropic/claude-3-sonnet"` via OpenAI SDK).
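
For example, a minimal sketch of the parsing behavior (the exact call is shown in the `types.go` example below; `schemas.YourDefaultProvider` is the same placeholder used throughout this guide):

```go
import "github.com/maximhq/bifrost/transports/bifrost-http/integrations"

// "provider/model": the prefix wins and is stripped from the model name
provider, model := integrations.ParseModelString("anthropic/claude-3-sonnet", schemas.YourDefaultProvider)
// provider == schemas.Anthropic, model == "claude-3-sonnet"

// "model": falls back to your integration's default provider
provider, model = integrations.ParseModelString("claude-3-sonnet", schemas.YourDefaultProvider)
// provider == schemas.YourDefaultProvider, model == "claude-3-sonnet"
```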

### **Integration Architecture Flow**

```mermaid
@@ -150,6 +152,7 @@ package your_integration

import (
"github.com/maximhq/bifrost/core/schemas"
"github.com/maximhq/bifrost/transports/bifrost-http/integrations"
)

// YourChatRequest represents the incoming request format
Expand Down Expand Up @@ -179,6 +182,11 @@ type YourChatResponse struct {

// ConvertToBifrostRequest converts your service format to Bifrost format
func (r *YourChatRequest) ConvertToBifrostRequest() *schemas.BifrostRequest {
// Enable multi-provider support with ParseModelString
// This allows users to specify "provider/model" (e.g., "anthropic/claude-3-sonnet")
// or just "model" (uses your integration's default provider)
provider, modelName := integrations.ParseModelString(r.Model, schemas.YourDefaultProvider)

// Convert messages
bifrostMessages := make([]schemas.ModelChatMessage, len(r.Messages))
for i, msg := range r.Messages {
@@ -195,7 +203,8 @@ func (r *YourChatRequest) ConvertToBifrostRequest() *schemas.BifrostRequest {
}

return &schemas.BifrostRequest{
-    Model: r.Model,
+    Model: modelName, // Clean model name without provider prefix
+    Provider: provider, // Extracted or default provider
MaxTokens: &r.MaxTokens,
Temperature: r.Temperature,
Input: schemas.BifrostInput{
@@ -532,6 +541,7 @@ import (
- [ ] **Type Definitions** - Implemented `types.go` with request/response types
- [ ] **Request Conversion** - Properly converts service format to Bifrost format
- [ ] **Response Conversion** - Properly converts Bifrost format to service format
- [ ] **Multi-Provider Support** - Uses `ParseModelString` to enable "provider/model" syntax
- [ ] **Error Handling** - Handles all error cases gracefully
- [ ] **Tool Support** - Supports function/tool calling if applicable
- [ ] **Multi-Modal Support** - Supports images/vision if applicable
@@ -566,20 +576,47 @@ import (

## 🔧 **Common Patterns**

-### **Model Provider Detection**
+### **Multi-Provider Model Support**

-Use Bifrost's built-in provider detection:
+Enable users to access multiple providers through your integration using `ParseModelString` (the same pattern as the `types.go` example above):

```go
import "github.com/maximhq/bifrost/transports/bifrost-http/integrations"

-// In request converter
+// In request converter - enables "provider/model" syntax
func (r *YourChatRequest) ConvertToBifrostRequest() *schemas.BifrostRequest {
// ParseModelString handles both "provider/model" and "model" formats
// - "anthropic/claude-3-sonnet" → (schemas.Anthropic, "claude-3-sonnet")
// - "claude-3-sonnet" → (schemas.YourDefaultProvider, "claude-3-sonnet")
provider, modelName := integrations.ParseModelString(r.Model, schemas.YourDefaultProvider)

return &schemas.BifrostRequest{
Model: modelName, // Clean model name without provider prefix
Provider: provider, // Extracted or default provider
// ... rest of conversion
}
}
```

**Benefits for Users:**

- **OpenAI SDK**: `model: "anthropic/claude-3-sonnet"` routes to Anthropic
- **Anthropic SDK**: `model: "openai/gpt-4o"` routes to OpenAI
- **Your SDK**: `model: "vertex/gemini-pro"` routes to Google Vertex
- **Backward Compatible**: `model: "claude-3-sonnet"` uses your default provider

### **Alternative: Pattern-Based Detection**

For automatic provider detection without prefixes:

```go
// Legacy approach - still supported but less flexible
func (r *YourChatRequest) ConvertToBifrostRequest() *schemas.BifrostRequest {
provider := integrations.GetProviderFromModel(r.Model)

return &schemas.BifrostRequest{
Model: r.Model,
-    Provider: &provider,
+    Provider: provider,
// ... rest of conversion
}
}
45 changes: 45 additions & 0 deletions docs/contributing/provider.md
@@ -595,6 +595,51 @@ Before submitting your provider implementation:
- [ ] **Key Handling** - Proper API key requirement configuration
- [ ] **Configuration** - Standard provider configuration support

### **HTTP Transport Integration**

- [ ] **Provider Recognition** - Added to `validProviders` map in `transports/bifrost-http/integrations/utils.go`
- [ ] **Model Patterns** - Added patterns to appropriate `is*Model()` functions in utils.go
- [ ] **Transport Tests** - All tests pass in `tests/transports-integrations/` directory
- [ ] **Multi-Provider Support** - Verified `ParseModelString` correctly handles your provider prefix

**Required Updates in `utils.go`:**

```go
// 1. Add to validProviders map
var validProviders = map[schemas.ModelProvider]bool{
// ... existing providers
schemas.YourProvider: true, // Add this line
}

// 2. Add model patterns to appropriate function
func isYourProviderModel(model string) bool {
yourProviderPatterns := []string{
"your-provider-pattern", "your-model-prefix", "yourprovider/",
}
return matchesAnyPattern(model, yourProviderPatterns)
}

// 3. Add pattern check to GetProviderFromModel
func GetProviderFromModel(model string) schemas.ModelProvider {
// ... existing checks

// Your Provider Models
if isYourProviderModel(modelLower) {
return schemas.YourProvider
}

// ... rest of function
}
```
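
To back the "Multi-Provider Support" checklist item, a minimal test sketch (a hypothetical `utils_test.go` in the `integrations` package; `schemas.YourProvider`, the model strings, and the test name are placeholders, and it assumes your provider is already registered in `validProviders` as above):

```go
package integrations

import (
	"testing"

	"github.com/maximhq/bifrost/core/schemas"
)

func TestYourProviderRouting(t *testing.T) {
	// A "provider/model" prefix should select your provider and strip the prefix.
	provider, model := ParseModelString("yourprovider/your-model-v1", schemas.Anthropic)
	if provider != schemas.YourProvider || model != "your-model-v1" {
		t.Errorf("ParseModelString = (%v, %q), want (%v, %q)", provider, model, schemas.YourProvider, "your-model-v1")
	}

	// A bare model name should fall back to the supplied default provider.
	provider, model = ParseModelString("your-model-v1", schemas.YourProvider)
	if provider != schemas.YourProvider || model != "your-model-v1" {
		t.Errorf("fallback ParseModelString = (%v, %q)", provider, model)
	}

	// Pattern-based detection should also recognize your model names.
	if got := GetProviderFromModel("your-model-prefix-large"); got != schemas.YourProvider {
		t.Errorf("GetProviderFromModel = %v, want %v", got, schemas.YourProvider)
	}
}
```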

**Test Your Integration:**

```bash
# Run HTTP transport integration tests
cd tests/transports-integrations
python -m pytest tests/integrations/ -v
```

---

## 🚀 **Advanced Features**
95 changes: 94 additions & 1 deletion docs/usage/http-transport/integrations/README.md
@@ -64,7 +64,7 @@ client = openai.OpenAI(

Your existing code gets these features automatically:

-- **Multi-provider fallbacks** - Automatic failover between providers
+- **Multi-provider fallbacks** - Automatic failover between multiple providers, regardless of the SDK you use
- **Load balancing** - Distribute requests across multiple API keys
- **Rate limiting** - Built-in request throttling and queuing
- **Tool integration** - MCP tools available in all requests
@@ -161,6 +161,99 @@ export ANTHROPIC_BASE_URL="http://bifrost:8080/anthropic"

---

## 🌐 Multi-Provider Usage

### **Provider-Prefixed Models**

Use multiple providers seamlessly by prefixing model names with the provider:

```python
import openai

# Single client, multiple providers
client = openai.OpenAI(
base_url="http://localhost:8080/openai",
api_key="dummy" # API keys configured in Bifrost
)

# OpenAI models
response1 = client.chat.completions.create(
model="gpt-4o-mini", # (default OpenAI since it's OpenAI's SDK)
messages=[{"role": "user", "content": "Hello!"}]
)

# Anthropic models using OpenAI SDK format
response2 = client.chat.completions.create(
model="anthropic/claude-3-sonnet-20240229",
messages=[{"role": "user", "content": "Hello!"}]
)

# Google Vertex models
response3 = client.chat.completions.create(
model="vertex/gemini-pro",
messages=[{"role": "user", "content": "Hello!"}]
)

# Azure OpenAI models
response4 = client.chat.completions.create(
model="azure/gpt-4o",
messages=[{"role": "user", "content": "Hello!"}]
)

# Local Ollama models
response5 = client.chat.completions.create(
model="ollama/llama3.1:8b",
messages=[{"role": "user", "content": "Hello!"}]
)
```

### **Provider-Specific Optimization**

```python
import openai

client = openai.OpenAI(
base_url="http://localhost:8080/openai",
api_key="dummy"
)

def choose_optimal_model(task_type: str, content: str):
"""Choose the best model based on task requirements"""

if task_type == "code":
# OpenAI excels at code generation
return "openai/gpt-4o-mini"

elif task_type == "creative":
# Anthropic is great for creative writing
return "anthropic/claude-3-sonnet-20240229"

elif task_type == "analysis" and len(content) > 10000:
# Anthropic has larger context windows
return "anthropic/claude-3-sonnet-20240229"

elif task_type == "multilingual":
# Google models excel at multilingual tasks
return "vertex/gemini-pro"

else:
# Default to fastest/cheapest
return "openai/gpt-4o-mini"

# Usage examples
code_response = client.chat.completions.create(
model=choose_optimal_model("code", ""),
messages=[{"role": "user", "content": "Write a Python web scraper"}]
)

creative_response = client.chat.completions.create(
model=choose_optimal_model("creative", ""),
messages=[{"role": "user", "content": "Write a short story about AI"}]
)
```

---

## 🚀 Deployment Scenarios

### **Microservices Architecture**
36 changes: 36 additions & 0 deletions docs/usage/http-transport/integrations/anthropic-compatible.md
@@ -542,6 +542,42 @@ test_tool_use()

---

## 🌐 Multi-Provider Support

Use multiple providers with Anthropic SDK format by prefixing model names:

```python
import anthropic

client = anthropic.Anthropic(
base_url="http://localhost:8080/anthropic",
api_key="dummy" # API keys configured in Bifrost
)

# Anthropic models (default)
response1 = client.messages.create(
model="claude-3-sonnet-20240229",
max_tokens=100,
messages=[{"role": "user", "content": "Hello!"}]
)

# OpenAI models via Anthropic SDK
response2 = client.messages.create(
model="openai/gpt-4o-mini",
max_tokens=100,
messages=[{"role": "user", "content": "Hello!"}]
)

# Vertex models via Anthropic SDK
response3 = client.messages.create(
model="vertex/gemini-pro",
max_tokens=100,
messages=[{"role": "user", "content": "Hello!"}]
)
```

---

## 🔧 Configuration

### **Bifrost Config for Anthropic**
27 changes: 27 additions & 0 deletions docs/usage/http-transport/integrations/genai-compatible.md
@@ -493,6 +493,33 @@ test_function_calling()

---

## 🌐 Multi-Provider Support

Use multiple providers with Google GenAI SDK format by prefixing model names:

```python
import google.generativeai as genai

genai.configure(
api_key="dummy", # API keys configured in Bifrost
client_options={"api_endpoint": "http://localhost:8080/genai"}
)

# Google models (default)
model1 = genai.GenerativeModel('gemini-pro')
response1 = model1.generate_content("Hello!")

# OpenAI models via GenAI SDK
model2 = genai.GenerativeModel('openai/gpt-4o-mini')
response2 = model2.generate_content("Hello!")

# Anthropic models via GenAI SDK
model3 = genai.GenerativeModel('anthropic/claude-3-sonnet-20240229')
response3 = model3.generate_content("Hello!")
```

---

## 🔧 Configuration

### **Bifrost Config for Google GenAI**
33 changes: 33 additions & 0 deletions docs/usage/http-transport/integrations/openai-compatible.md
@@ -467,6 +467,39 @@ benchmark_response_time(openai_client, "Direct OpenAI")

---

## 🌐 Multi-Provider Support

Use multiple providers with OpenAI SDK format by prefixing model names:

```python
import openai

client = openai.OpenAI(
base_url="http://localhost:8080/openai",
api_key="dummy" # API keys configured in Bifrost
)

# OpenAI models (default)
response1 = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello!"}]
)

# Anthropic models via OpenAI SDK
response2 = client.chat.completions.create(
model="anthropic/claude-3-sonnet-20240229",
messages=[{"role": "user", "content": "Hello!"}]
)

# Vertex models via OpenAI SDK
response3 = client.chat.completions.create(
model="vertex/gemini-pro",
messages=[{"role": "user", "content": "Hello!"}]
)
```

---

## 🔧 Configuration

### **Bifrost Config for OpenAI**
7 changes: 5 additions & 2 deletions transports/bifrost-http/integrations/anthropic/types.go
@@ -6,6 +6,7 @@ import (

bifrost "github.com/maximhq/bifrost/core"
"github.com/maximhq/bifrost/core/schemas"
"github.com/maximhq/bifrost/transports/bifrost-http/integrations"
)

var fnTypePtr = bifrost.Ptr(string(schemas.ToolChoiceTypeFunction))
@@ -133,9 +134,11 @@ func (mc *AnthropicContent) UnmarshalJSON(data []byte) error {

// ConvertToBifrostRequest converts an Anthropic messages request to Bifrost format
func (r *AnthropicMessageRequest) ConvertToBifrostRequest() *schemas.BifrostRequest {
provider, model := integrations.ParseModelString(r.Model, schemas.Anthropic)

bifrostReq := &schemas.BifrostRequest{
-    Provider: schemas.Anthropic,
-    Model:    r.Model,
+    Provider: provider,
+    Model:    model,
}

messages := []schemas.BifrostMessage{}