94 changes: 50 additions & 44 deletions README.md
@@ -26,43 +26,47 @@ Bifrost is a high-performance AI gateway that connects you to 8+ providers (Open

📖 For detailed setup guides with multiple providers, advanced configuration, and language examples, see [Quick Start Documentation](./docs/quickstart/README.md)

**Step 1:** Start Bifrost (choose one)

```bash
# 🐳 Docker (easiest - zero config needed!)
docker pull maximhq/bifrost
docker run -p 8080:8080 maximhq/bifrost

# 🔧 Or install the Go binary (make sure Go's bin directory is on your PATH)
go install github.com/maximhq/bifrost/transports/bifrost-http@latest
bifrost-http -port 8080
```

**Step 2:** Open the built-in web interface

```bash
# 🖥️ Configure visually - no config files needed!
# macOS:
open http://localhost:8080

# Linux:
xdg-open http://localhost:8080

# Windows:
start http://localhost:8080

# Or simply open http://localhost:8080 manually in your browser
```

**Step 3:** Add your provider via the web UI or API

```bash
# Via Web UI: Just click "Add Provider" and enter your OpenAI API key
# Or via API:
curl -X POST http://localhost:8080/providers \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "keys": [{"value": "env.OPENAI_API_KEY", "models": ["gpt-4o-mini"], "weight": 1.0}]
  }'

# Make sure OPENAI_API_KEY is set in Bifrost's environment, or pass it to Docker with the -e flag (docker run -e OPENAI_API_KEY maximhq/bifrost).
```
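
If you prefer to script this step, here is the same provider registration sketched in Python. It assumes only what the curl example above shows (a `POST /providers` endpoint accepting a provider name and weighted keys) and uses the third-party `requests` package; any HTTP client works.

```python
# Hedged sketch: the same request as the curl example above.
import requests

payload = {
    "provider": "openai",
    "keys": [
        {
            # "env.OPENAI_API_KEY" tells Bifrost to read the key from its
            # own environment instead of storing it in the payload.
            "value": "env.OPENAI_API_KEY",
            "models": ["gpt-4o-mini"],
            "weight": 1.0,
        }
    ],
}

resp = requests.post("http://localhost:8080/providers", json=payload)
resp.raise_for_status()
print(resp.status_code, resp.text)
```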

**Step 4:** Test it works
@@ -81,11 +85,12 @@ curl -X POST http://localhost:8080/v1/chat/completions \
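
The diff view collapses the rest of this curl command. For reference, here is an equivalent test request sketched in Python; the body shape assumes Bifrost's OpenAI-compatible chat completions schema and the `gpt-4o-mini` model configured in Step 3.

```python
# Hedged sketch: equivalent of the truncated curl command above.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello from Bifrost!"}],
    },
)
resp.raise_for_status()
# Assumes an OpenAI-style response shape; print the first choice's text.
print(resp.json()["choices"][0]["message"]["content"])
```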

**🎉 Boom! You're done!**

Your AI gateway is now running with a beautiful web interface. You can:

- **🖥️ Configure everything visually** - No more JSON files!
- **📊 Monitor requests in real-time** - See logs, analytics, and metrics
- **🔄 Add providers and MCP clients on-the-fly** - Scale and failover without restarts
- **🚀 Drop into existing code** - Zero changes to your OpenAI/Anthropic apps (see the sketch below)
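
As a concrete illustration of that last point, here is a minimal sketch using the `openai` Python package (>=1.0): overriding `base_url` is the only change, matching the one-line `base_url` switch described in the features list below. The placeholder API key is an assumption, since Bifrost holds the real provider keys.

```python
# Hedged sketch: an existing OpenAI SDK app pointed at Bifrost.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # Bifrost instead of api.openai.com
    api_key="placeholder",  # assumption: Bifrost manages the real keys
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello via Bifrost!"}],
)
print(reply.choices[0].message.content)
```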

> **Want more?** See our [Complete Setup Guide](./docs/quickstart/http-transport.md) for multi-provider configuration, failover strategies, and production deployment.

@@ -112,18 +117,19 @@

## ✨ Features

- **🖥️ Built-in Web UI**: Visual configuration, real-time monitoring, and analytics dashboard - no config files needed
- **🚀 Zero-Config Startup & Easy Integration**: Start immediately with dynamic provider configuration, or integrate existing SDKs by simply updating the `base_url` - one line of code to get running
- **🔄 Multi-Provider Support**: Integrate with OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more through a single API
- **🛡️ Fallback Mechanisms**: Automatically retry failed requests with alternative models or providers
- **🔑 Dynamic Key Management**: Rotate and manage API keys efficiently with weighted distribution (see the sketch after this list)
- **⚡ Connection Pooling**: Optimize network resources for better performance
- **🎯 Concurrency Control**: Manage rate limits and parallel requests effectively
- **🔌 Flexible Transports**: Multiple transports for easy integration into your infra
- **🏗️ Plugin First Architecture**: No callback hell, simple addition/creation of custom plugins
- **🛠️ MCP Integration**: Built-in Model Context Protocol (MCP) support for external tool integration and execution
- **⚙️ Custom Configuration**: Offers granular control over pool sizes, network retry settings, fallback providers, and network proxy configurations
- **📊 Built-in Observability**: Native Prometheus metrics out of the box, no wrappers, no sidecars, just drop it in and scrape
- **🔧 SDK Support**: Bifrost is available as a Go package, so you can use it directly in your own applications
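
For example, weighted key distribution can be expressed with the same key schema used in Step 3's `POST /providers` example. The environment variable names and the 70/30 split below are illustrative assumptions.

```python
# Hedged sketch: two weighted keys for one provider, reusing the key
# schema from the POST /providers example in Step 3.
import requests

payload = {
    "provider": "openai",
    "keys": [
        # Most traffic goes to the primary key...
        {"value": "env.OPENAI_API_KEY_PRIMARY", "models": ["gpt-4o-mini"], "weight": 0.7},
        # ...with the remainder routed to the backup key.
        {"value": "env.OPENAI_API_KEY_BACKUP", "models": ["gpt-4o-mini"], "weight": 0.3},
    ],
}

requests.post("http://localhost:8080/providers", json=payload).raise_for_status()
```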

---

2 changes: 1 addition & 1 deletion docs/README.md
@@ -11,7 +11,7 @@ Choose your preferred way to use Bifrost:
| **🔧 Go Package** | Direct integration, maximum control | 2 minutes | [📖 Go Package Guide](quickstart/go-package.md) |
| **🌐 HTTP Transport** | Language-agnostic, microservices | 30 seconds | [📖 HTTP Transport Guide](quickstart/http-transport.md) |

**New to Bifrost?** Start with [⚡ Quick Start](quickstart/) to get running with zero configuration in under 30 seconds.

---

9 changes: 6 additions & 3 deletions docs/contributing/README.md
@@ -13,14 +13,17 @@ Ready to contribute? Here's your fastest path to making an impact:
```bash
# 1. Fork and clone
git clone https://github.com/YOUR_USERNAME/bifrost.git
cd bifrost/core # or bifrost/transports

# 2. Install dependencies
go mod download

# 3. Verify setup
cd ../tests/core-providers/
go test -run TestOpenAI # or any provider you want to test

cd ../transports-integrations/
# Read the README.md in the transports-integrations directory for testing instructions

# 4. You're ready! 🎉
```
Binary file modified docs/media/cover.png
2 changes: 2 additions & 0 deletions docs/quickstart/README.md
@@ -25,11 +25,13 @@ Get up and running with Bifrost in under 30 seconds. Choose your preferred integ

## 🌐 **HTTP Transport** - Choose if you:

- ✅ Want a clean UI for configuration and monitoring
- ✅ Use any programming language (Python, Node.js, etc.)
- ✅ Want to keep AI logic separate from your application
- ✅ Need a centralized AI gateway for multiple services
- ✅ Prefer REST API integration patterns
- ✅ Want drop-in compatibility with existing provider SDKs
- ✅ Want to **add providers & MCP clients on-the-fly** without restarts

**→ [Start with HTTP Transport](http-transport.md)**
