README.md
Bifrost is a high-performance AI gateway that connects you to 8+ providers (OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more).

📖 For detailed setup guides with multiple providers, advanced configuration, and language examples, see [Quick Start Documentation](./docs/quickstart/README.md)
**Step 1:** Start Bifrost (choose one)

```bash
# 🐳 Docker (easiest - zero config needed!)
docker pull maximhq/bifrost
docker run -p 8080:8080 maximhq/bifrost

# 🔧 Or install Go binary (make sure Go is in your PATH)
go install github.com/maximhq/bifrost/transports/bifrost-http@latest
bifrost-http -port 8080
```
**Step 2:** Open the built-in web interface

```bash
# 🖥️ Configure visually - no config files needed!
open http://localhost:8080
```
**Step 3:** Add your provider via the web UI or API

```bash
# Via Web UI: just click "Add Provider" and enter your OpenAI API key.
# Make sure to set the environment variable OPENAI_API_KEY in Bifrost's session,
# or pass it as a flag in Docker (docker run -e OPENAI_API_KEY maximhq/bifrost).
```
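If you go the API route instead of the web UI, the provider entry follows the same shape as Bifrost's JSON config schema (a provider with `keys`, each key carrying a `value`, `models`, and `weight`). A minimal sketch of building that payload; the management endpoint to POST it to is not shown in this excerpt, so here we only construct and print it:

```python
import json

# Provider entry shaped like Bifrost's config schema.
# "env.OPENAI_API_KEY" tells Bifrost to read the key from the environment.
payload = {
    "providers": {
        "openai": {
            "keys": [
                {
                    "value": "env.OPENAI_API_KEY",
                    "models": ["gpt-4o-mini"],
                    "weight": 1.0,
                }
            ]
        }
    }
}

# Serialize for sending to the gateway's provider-management endpoint.
print(json.dumps(payload, indent=2))
```

The `weight` field is what Bifrost's weighted key distribution uses when several keys are configured for the same provider.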
**Step 4:** Test it works

```bash
curl -X POST http://localhost:8080/v1/chat/completions \
  …
```
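The same test request can be built with Python's standard library. This sketch only constructs the request (the model name and message are illustrative); uncomment the `urlopen` call once the gateway is running:

```python
import json
import urllib.request

# OpenAI-compatible chat completion request aimed at the local gateway.
body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from Bifrost!"}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would send it against a running gateway.
print(req.get_method(), req.full_url)
```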
**🎉 Boom! You're done!**

Your AI gateway is now running with a beautiful web interface. You can:

- **🖥️ Configure everything visually** - No more JSON files!
- **📊 Monitor requests in real-time** - See logs, analytics, and metrics
- **🔄 Add providers and MCP clients on-the-fly** - Scale and failover without restarts
- **🚀 Drop into existing code** - Zero changes to your OpenAI/Anthropic apps

> **Want more?** See our [Complete Setup Guide](./docs/quickstart/http-transport.md) for multi-provider configuration, failover strategies, and production deployment.
## ✨ Features

- **🖥️ Built-in Web UI**: Visual configuration, real-time monitoring, and analytics dashboard - no config files needed
- **🚀 Zero Configuration Startup**: Start immediately, add providers dynamically via web interface or API
- **🔄 Multi-Provider Support**: Integrate with OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more through a single API
- **🛡️ Fallback Mechanisms**: Automatically retry failed requests with alternative models or providers
- **🔑 Dynamic Key Management**: Rotate and manage API keys efficiently with weighted distribution
- **⚡ Connection Pooling**: Optimize network resources for better performance
- **🎯 Concurrency Control**: Manage rate limits and parallel requests effectively
- **🔌 Flexible Transports**: Multiple transports for easy integration into your infra
- **🏗️ Plugin First Architecture**: No callback hell; simple addition and creation of custom plugins
- **🛠️ MCP Integration**: Built-in Model Context Protocol (MCP) support for external tool integration and execution
- **⚙️ Custom Configuration**: Granular control over pool sizes, network retry settings, fallback providers, and network proxy configurations
- **📊 Built-in Observability**: Native Prometheus metrics out of the box; no wrappers, no sidecars, just drop it in and scrape
- **🔧 SDK Support**: Bifrost is available as a Go package, so you can use it directly in your own applications
- **🔄 Seamless Integration with Generative AI SDKs**: Transition to Bifrost by simply updating the `base_url` in your existing SDKs, such as OpenAI, Anthropic, GenAI, and more. One line of code is all it takes to make the switch
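The weighted key distribution and fallback behavior described above can be sketched roughly as follows. All names here are hypothetical illustrations; Bifrost's actual implementation is in Go and considerably more involved:

```python
import random

# Hypothetical key pool: higher weight means the key is chosen more often.
keys = [
    {"value": "key-a", "weight": 0.7},
    {"value": "key-b", "weight": 0.3},
]

def pick_key(keys):
    # Weighted random selection, mirroring weighted key distribution.
    values = [k["value"] for k in keys]
    weights = [k["weight"] for k in keys]
    return random.choices(values, weights=weights, k=1)[0]

def call_with_fallback(providers, send):
    # Try each provider in order; fall through to the next on failure.
    for provider in providers:
        try:
            return send(provider)
        except RuntimeError:
            continue
    raise RuntimeError("all providers failed")

# Simulated transport: the primary provider fails, the fallback succeeds.
def send(provider):
    if provider == "openai":
        raise RuntimeError("rate limited")
    return f"response from {provider}"

print(call_with_fallback(["openai", "anthropic"], send))  # response from anthropic
```

This is only a toy model of the idea: a request first exhausts the primary provider's weighted keys, then falls back to the next configured provider.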