Commit 1eb1619

feat: transport flags simplified and logs updated
1 parent cf8d144 commit 1eb1619

File tree

14 files changed: +533 −303 lines changed

README.md

Lines changed: 42 additions & 44 deletions
@@ -26,43 +26,38 @@ Bifrost is a high-performance AI gateway that connects you to 8+ providers (Open
 
 📖 For detailed setup guides with multiple providers, advanced configuration, and language examples, see [Quick Start Documentation](./docs/quickstart/README.md)
 
-**Step 1:** Create your config (copy & paste this)
-
-```json
-{
-  "providers": {
-    "openai": {
-      "keys": [
-        {
-          "value": "env.OPENAI_API_KEY",
-          "models": ["gpt-4o-mini"],
-          "weight": 1.0
-        }
-      ]
-    }
-  }
-}
+**Step 1:** Start Bifrost (choose one)
+
+```bash
+# 🐳 Docker (easiest - zero config needed!)
+docker pull maximhq/bifrost
+docker run -p 8080:8080 maximhq/bifrost
+
+# 🔧 Or install Go binary (Make sure Go is in your PATH)
+go install github.com/maximhq/bifrost/transports/bifrost-http@latest
+bifrost-http -port 8080
 ```
 
-**Step 2:** Add your API key
+**Step 2:** Open the built-in web interface
 
 ```bash
-export OPENAI_API_KEY=your_openai_api_key
+# 🖥️ Configure visually - no config files needed!
+open http://localhost:8080
 ```
 
-**Step 3:** Start Bifrost (choose one)
+**Step 3:** Add your provider via the web UI or API
 
 ```bash
-# 🐳 Docker
-docker pull maximhq/bifrost
-docker run -p 8080:8080 \
-  -v $(pwd)/config.json:/app/config/config.json \
-  -e OPENAI_API_KEY \
-  maximhq/bifrost
+# Via Web UI: Just click "Add Provider" and enter your OpenAI API key
+# Or via API:
+curl -X POST http://localhost:8080/providers \
+  -H "Content-Type: application/json" \
+  -d '{
+    "provider": "openai",
+    "keys": [{"value": "env.OPENAI_API_KEY", "models": ["gpt-4o-mini"], "weight": 1.0}]
+  }'
 
-# 🔧 Or install Go binary (Make sure Go is in your PATH)
-go install github.com/maximhq/bifrost/transports/bifrost-http@latest
-bifrost-http -config config.json -port 8080
+# Make sure to set the environment variable OPENAI_API_KEY in bifrost's session, or pass it as a flag in Docker (docker run -e OPENAI_API_KEY maximhq/bifrost).
 ```
 
 **Step 4:** Test it works
@@ -81,11 +76,12 @@ curl -X POST http://localhost:8080/v1/chat/completions \
 
 **🎉 Boom! You're done!**
 
-Your AI gateway is now running and ready for production. You can:
+Your AI gateway is now running with a beautiful web interface. You can:
 
-- Add more providers for automatic failover
-- Scale to thousands of requests per second
-- Drop this into existing OpenAI/Anthropic code with zero changes
+- **🖥️ Configure everything visually** - No more JSON files!
+- **📊 Monitor requests in real-time** - See logs, analytics, and metrics
+- **🔄 Add providers and MCP clients on-the-fly** - Scale and failover without restarts
+- **🚀 Drop into existing code** - Zero changes to your OpenAI/Anthropic apps
 
 > **Want more?** See our [Complete Setup Guide](./docs/quickstart/http-transport.md) for multi-provider configuration, failover strategies, and production deployment.
 
@@ -112,18 +108,20 @@ Your AI gateway is now running and ready for production. You can:
 
 ## ✨ Features
 
-- **Multi-Provider Support**: Integrate with OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more through a single API
-- **Fallback Mechanisms**: Automatically retry failed requests with alternative models or providers
-- **Dynamic Key Management**: Rotate and manage API keys efficiently with weighted distribution
-- **Connection Pooling**: Optimize network resources for better performance
-- **Concurrency Control**: Manage rate limits and parallel requests effectively
-- **Flexible Transports**: Multiple transports for easy integration into your infra
-- **Plugin First Architecture**: No callback hell, simple addition/creation of custom plugins
-- **MCP Integration**: Built-in Model Context Protocol (MCP) support for external tool integration and execution
-- **Custom Configuration**: Offers granular control over pool sizes, network retry settings, fallback providers, and network proxy configurations
-- **Built-in Observability**: Native Prometheus metrics out of the box, no wrappers, no sidecars, just drop it in and scrape
-- **SDK Support**: Bifrost is available as a Go package, so you can use it directly in your own applications.
-- **Seamless Integration with Generative AI SDKs**: Effortlessly transition to Bifrost by simply updating the `base_url` in your existing SDKs, such as OpenAI, Anthropic, GenAI, and more. Just one line of code is all it takes to make the switch.
+- **🖥️ Built-in Web UI**: Visual configuration, real-time monitoring, and analytics dashboard - no config files needed
+- **🚀 Zero Configuration Startup**: Start immediately, add providers dynamically via web interface or API
+- **🔄 Multi-Provider Support**: Integrate with OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more through a single API
+- **🛡️ Fallback Mechanisms**: Automatically retry failed requests with alternative models or providers
+- **🔑 Dynamic Key Management**: Rotate and manage API keys efficiently with weighted distribution
+- **⚡ Connection Pooling**: Optimize network resources for better performance
+- **🎯 Concurrency Control**: Manage rate limits and parallel requests effectively
+- **🔌 Flexible Transports**: Multiple transports for easy integration into your infra
+- **🏗️ Plugin First Architecture**: No callback hell, simple addition/creation of custom plugins
+- **🛠️ MCP Integration**: Built-in Model Context Protocol (MCP) support for external tool integration and execution
+- **⚙️ Custom Configuration**: Offers granular control over pool sizes, network retry settings, fallback providers, and network proxy configurations
+- **📊 Built-in Observability**: Native Prometheus metrics out of the box, no wrappers, no sidecars, just drop it in and scrape
+- **🔧 SDK Support**: Bifrost is available as a Go package, so you can use it directly in your own applications
+- **🔄 Seamless Integration with Generative AI SDKs**: Effortlessly transition to Bifrost by simply updating the `base_url` in your existing SDKs, such as OpenAI, Anthropic, GenAI, and more. Just one line of code is all it takes to make the switch
 
 ---
 
docs/README.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Choose your preferred way to use Bifrost:
 | **🔧 Go Package** | Direct integration, maximum control | 2 minutes | [📖 Go Package Guide](quickstart/go-package.md) |
 | **🌐 HTTP Transport** | Language-agnostic, microservices | 30 seconds | [📖 HTTP Transport Guide](quickstart/http-transport.md) |
 
-**New to Bifrost?** Start with [⚡ Quick Start](quickstart/) to get running in under 30 seconds.
+**New to Bifrost?** Start with [⚡ Quick Start](quickstart/) to get running with zero configuration in under 30 seconds.
 
 ---
 

docs/contributing/README.md

Lines changed: 6 additions & 3 deletions
@@ -13,14 +13,17 @@ Ready to contribute? Here's your fastest path to making an impact:
 ```bash
 # 1. Fork and clone
 git clone https://github.com/YOUR_USERNAME/bifrost.git
-cd bifrost
+cd bifrost/core # or bifrost/transports
 
 # 2. Install dependencies
 go mod download
 
 # 3. Verify setup
-go test ./core/...
-cd transports && go build -o bifrost-http
+cd ../tests/core-providers/
+go test -run TestOpenAI # or any provider you want to test
+
+cd ../transports-integrations/
+# read the README.md file in the transports-integrations directory for testing instructions
 
 # 4. You're ready! 🎉
 ```

docs/quickstart/README.md

Lines changed: 3 additions & 0 deletions
@@ -25,11 +25,14 @@ Get up and running with Bifrost in under 30 seconds. Choose your preferred integ
 
 ## 🌐 **HTTP Transport** - Choose if you:
 
+- ✅ Want a clean UI for configuration and monitoring
 - ✅ Use any programming language (Python, Node.js, etc.)
 - ✅ Want to keep AI logic separate from your application
 - ✅ Need a centralized AI gateway for multiple services
 - ✅ Prefer REST API integration patterns
 - ✅ Want drop-in compatibility with existing provider SDKs
+- ✅ Want to **add providers on-the-fly** without restarts
+- ✅ Want to **add MCP clients on-the-fly** without restarts
 
 **[Start with HTTP Transport](http-transport.md)**

docs/quickstart/http-transport.md

Lines changed: 124 additions & 48 deletions
@@ -1,12 +1,37 @@
 # 🌐 HTTP Transport Quick Start
 
-Get Bifrost running as an HTTP API in 30 seconds using Docker. Perfect for any programming language.
+Get Bifrost running as an HTTP API in **15 seconds** with **zero configuration**! Perfect for any programming language.
 
-## ⚡ 30-Second Setup
+## 🚀 Zero-Config Setup (15 seconds!)
 
-### 1. Create `config.json`
+### 1. Start Bifrost (No config needed!)
 
-This file should contain your provider settings and API keys.
+```bash
+# 🐳 Docker (fastest)
+docker pull maximhq/bifrost
+docker run -p 8080:8080 maximhq/bifrost
+
+# 🔧 OR Go Binary (Make sure Go is in your PATH)
+go install github.com/maximhq/bifrost/transports/bifrost-http@latest
+bifrost-http -port 8080
+```
+
+### 2. Open the Web Interface
+
+```bash
+# 🖥️ Beautiful web UI for zero-config setup
+open http://localhost:8080
+```
+
+**🎉 That's it!** Configure providers visually, monitor requests in real-time, and get analytics - all through the web interface!
+
+---
+
+## 📂 File-Based Configuration (Optional)
+
+Want to use a config file instead? Bifrost automatically looks for `config.json` in your app directory:
+
+### 1. Create `config.json` in your app directory
 
 ```json
 {
@@ -24,32 +49,77 @@ This file should contain your provider settings and API keys.
 }
 ```
 
-### 2. Set Up Your Environment
-
-Add your environment variable to the session.
+### 2. Set environment variables and start
 
 ```bash
 export OPENAI_API_KEY="your-openai-api-key"
+
+# Docker with volume mount for persistence
+docker run -p 8080:8080 \
+  -v $(pwd):/app/data \
+  -e OPENAI_API_KEY \
+  maximhq/bifrost
+
+# OR Go Binary with app directory
+bifrost-http -app-dir . -port 8080
 ```
 
-### 3. Start the Bifrost HTTP Server
+---
+
+## 📁 Understanding App Directory & Docker Volumes
 
-You can run using Docker or Go binary.
+### **How the `-app-dir` Flag Works**
+
+The `-app-dir` flag tells Bifrost where to store and look for data:
 
 ```bash
-# Docker
-docker pull maximhq/bifrost
-docker run -p 8080:8080 \
-  -v $(pwd)/config.json:/app/config/config.json \
-  -e OPENAI_API_KEY \
-  maximhq/bifrost
+# Use current directory as app directory
+bifrost-http -app-dir .
 
-# OR Go Binary (Make sure Go in your PATH)
-go install github.com/maximhq/bifrost/transports/bifrost-http@latest
-bifrost-http -config config.json -port 8080
+# Use specific directory as app directory
+bifrost-http -app-dir /path/to/bifrost-data
+
+# Default: current directory if no flag specified
+bifrost-http -port 8080
 ```
 
-### 4. Test the API
+**What Bifrost stores in the app directory:**
+
+- `config.json` - Configuration file (if using file-based config)
+- `logs/` - Database logs and request history
+- Any other persistent data
+
+### **How Docker Volumes Work with App Directory**
+
+Docker volumes map your host directory to Bifrost's app directory:
+
+```bash
+# Map current host directory → /app/data inside container
+docker run -p 8080:8080 -v $(pwd):/app/data maximhq/bifrost
+
+# Map specific host directory → /app/data inside container
+docker run -p 8080:8080 -v /host/path/bifrost-data:/app/data maximhq/bifrost
+
+# No volume = ephemeral storage (lost when container stops)
+docker run -p 8080:8080 maximhq/bifrost
+```
+
+### **Persistence Scenarios**
+
+| Scenario | Command | Result |
+| ---------------------------- | ------------------------------------------------------------- | --------------------------------------- |
+| **Ephemeral (testing)** | `docker run -p 8080:8080 maximhq/bifrost` | No persistence, configure via web UI |
+| **Persistent (recommended)** | `docker run -p 8080:8080 -v $(pwd):/app/data maximhq/bifrost` | Saves config & logs to host directory |
+| **Pre-configured** | Create `config.json`, then run with volume | Starts with your existing configuration |
+
+### **Best Practices**
+
+- **🔧 Development**: Use `-v $(pwd):/app/data` to persist config between restarts
+- **🚀 Production**: Mount dedicated volume for data persistence
+- **🧪 Testing**: Run without volume for clean ephemeral instances
+- **👥 Teams**: Share `config.json` in version control, mount directory with volume
+
+### 3. Test the API
 
 ```bash
 # Make your first request
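The `weight` field on each key in `config.json` implies weighted distribution across API keys. A minimal sketch of what weighted selection could look like (illustrative only: the second key's environment variable is hypothetical, and Bifrost's real selection logic may differ):

```python
import random

def pick_key(keys, rng=random):
    """Choose a key with probability proportional to its "weight" field."""
    return rng.choices(keys, weights=[k["weight"] for k in keys], k=1)[0]

keys = [
    {"value": "env.OPENAI_API_KEY", "models": ["gpt-4o-mini"], "weight": 1.0},
    # hypothetical second key, used only to make the weighting visible
    {"value": "env.OPENAI_API_KEY_2", "models": ["gpt-4o-mini"], "weight": 3.0},
]

rng = random.Random(0)  # seeded for reproducibility
picks = [pick_key(keys, rng)["value"] for _ in range(1000)]
# The 3.0-weight key should be chosen roughly three times as often.
```

Equal weights (as in the single-key example above) simply spread traffic evenly.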
@@ -131,31 +201,33 @@ client = genai.Client(
 
 ---
 
-## 🚀 Next Steps (2 minutes each)
+## 🚀 Next Steps (30 seconds each)
+
+### **🖥️ Add Multiple Providers via Web UI**
 
-### **🔗 Add Multiple Providers**
+1. Open `http://localhost:8080` in your browser
+2. Click **"Add Provider"**
+3. Select **OpenAI**, enter your API key, choose models
+4. Click **"Add Provider"** again
+5. Select **Anthropic**, enter your API key, choose models
+6. **Done!** Your providers are now load-balanced automatically
+
+### **📡 Or Add Multiple Providers via API**
 
 ```bash
-# Create config.json
-echo '{
-  "providers": {
-    "openai": {
-      "keys": [{"value": "env.OPENAI_API_KEY", "models": ["gpt-4o-mini"], "weight": 1.0}]
-    },
-    "anthropic": {
-      "keys": [{"value": "env.ANTHROPIC_API_KEY", "models": ["claude-3-sonnet-20240229"], "weight": 1.0}]
-    }
-  }
-}' > config.json
+# Add OpenAI
+curl -X POST http://localhost:8080/providers \
+  -H "Content-Type: application/json" \
+  -d '{"provider": "openai", "keys": [{"value": "env.OPENAI_API_KEY", "models": ["gpt-4o-mini"], "weight": 1.0}]}'
+
+# Add Anthropic
+curl -X POST http://localhost:8080/providers \
+  -H "Content-Type: application/json" \
+  -d '{"provider": "anthropic", "keys": [{"value": "env.ANTHROPIC_API_KEY", "models": ["claude-3-sonnet-20240229"], "weight": 1.0}]}'
 
 # Set environment variables
+export OPENAI_API_KEY="your-openai-key"
 export ANTHROPIC_API_KEY="your-anthropic-key"
-
-# Start with config
-docker run -p 8080:8080 \
-  -v $(pwd)/config.json:/app/config/config.json \
-  -e OPENAI_API_KEY -e ANTHROPIC_API_KEY \
-  maximhq/bifrost
 ```
 
 ### **⚡ Test Different Providers**
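The two curl calls above POST the same JSON shape to `/providers`. If you are scripting the setup, the payload can be built once and reused; a sketch (the helper name is invented here, while the field names come from the diff itself):

```python
import json

def provider_payload(provider, env_var, models, weight=1.0):
    """Build the JSON body for POST /providers, mirroring the curl examples."""
    return {
        "provider": provider,
        "keys": [{"value": f"env.{env_var}", "models": models, "weight": weight}],
    }

openai_body = json.dumps(provider_payload("openai", "OPENAI_API_KEY", ["gpt-4o-mini"]))
anthropic_body = json.dumps(
    provider_payload("anthropic", "ANTHROPIC_API_KEY", ["claude-3-sonnet-20240229"])
)
```

Each body is what you would send with `Content-Type: application/json`, exactly as in the curl examples.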
@@ -239,12 +311,14 @@ response, err := http.Post(
 
 ## 🔧 Setup Methods Comparison
 
-| Method | Pros | Use When |
-| ------------- | ----------------------------------------------- | -------------------------------- |
-| **Docker** | No Go installation needed, isolated environment | Production, CI/CD, quick testing |
-| **Go Binary** | Direct execution, easier debugging | Development, custom builds |
+| Method | Pros | Use When |
+| --------------- | ----------------------------------------------- | -------------------------------- |
+| **Zero Config** | No files needed, visual setup, instant start | Quick testing, demos, new users |
+| **File-Based** | Version control, automation, deployment | Production, CI/CD, team setups |
+| **Docker** | No Go installation needed, isolated environment | Production, CI/CD, quick testing |
+| **Go Binary** | Direct execution, easier debugging | Development, custom builds |
 
-Both methods require the same `config.json` file and environment variables.
+**Note:** When using file-based config, Bifrost only looks for `config.json` in your specified app directory.
 
 ---
 
@@ -274,10 +348,12 @@ If you're building a Go application and want direct integration, try the **[Go P
 
 ## 💡 Why HTTP Transport?
 
-- **Language agnostic** - Use from Python, Node.js, PHP, etc.
-- **Drop-in replacement** - Zero code changes for existing apps
-- **OpenAI compatible** - All responses follow OpenAI structure
-- **Microservices ready** - Centralized AI gateway
-- **Production features** - Health checks, metrics, monitoring
+- **🖥️ Built-in Web UI** - Visual configuration, monitoring, and analytics
+- **🚀 Zero configuration** - Start instantly, configure dynamically
+- **🌐 Language agnostic** - Use from Python, Node.js, PHP, etc.
+- **🔄 Drop-in replacement** - Zero code changes for existing apps
+- **🔗 OpenAI compatible** - All responses follow OpenAI structure
+- **⚙️ Microservices ready** - Centralized AI gateway
+- **📊 Production features** - Health checks, metrics, monitoring
 
 **🎯 Ready for production? Check out [Complete HTTP Usage Guide](../usage/http-transport/)**
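The "drop-in replacement" bullet above works because Bifrost exposes OpenAI-compatible routes such as `/v1/chat/completions` (the endpoint used in the quickstart's test step). As a sketch, pointing a plain-stdlib client at a local instance looks like this; the request is only constructed here, not sent:

```python
import json
import urllib.request

def chat_request(model, messages, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request aimed at Bifrost."""
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("gpt-4o-mini", [{"role": "user", "content": "Hello Bifrost!"}])
# Send with urllib.request.urlopen(req) once the gateway is running.
```

Swapping `base_url` is the only change an existing OpenAI-style client would need, which is the one-line switch the README describes.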
