feat: add OpenAI SDK compatibility layer #180
base: main
Conversation
- Implement complete OpenAI API compatibility for seamless migration
- Add /v1/models and /v1/chat/completions endpoints with streaming support
- Integrate with existing LiteLLM completion system and RAG functionality
- Support OpenAI SDK drop-in replacement with enhanced features:
  * RAG-enabled completions with document retrieval
  * Persistent chat sessions
  * Streaming responses
  * Structured output (JSON mode)
  * Multimodal support (text + images)
- Add comprehensive test suite and usage examples
- Maintain existing authentication and authorization

Co-authored-by: ArnavAgrawal03 <ArnavAgrawal03@users.noreply.github.com>

🤖 Generated with [Claude Code](https://claude.ai/code)
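For context, this is roughly what the drop-in usage described above might look like with the OpenAI Python SDK pointed at a Morphik server. It is a sketch only; the base URL, API key, and model name are placeholders, not values defined by this PR.

```python
from openai import OpenAI

# Placeholder values: point the client at your own Morphik deployment and token.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical Morphik server URL
    api_key="your-morphik-token",         # assumed to be Morphik's existing auth token
)

# Standard OpenAI-style request, served by Morphik's /v1/chat/completions endpoint.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; pick a model returned by /v1/models
    messages=[{"role": "user", "content": "Summarize my indexed documents about pricing."}],
)
print(response.choices[0].message.content)
```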
@claude please resolve this comment
Claude finished @ArnavAgrawal03's task in 4m 24s

Resolving Rate Limiting Security Issue

✅ I've successfully addressed the high-severity rate limiting vulnerability in the OpenAI compatibility API.

Analysis Findings:
- Current State: The OpenAI compatibility endpoints (/v1/models and /v1/chat/completions) had no rate limiting applied.
- Existing Rate Limiting: The codebase already has a sophisticated tier-based rate limiting system built around check_and_increment_limits.
- Solution: Apply the existing check_and_increment_limits check to both OpenAI compatibility endpoints.

Implementation Complete:
- ✅ Added rate limiting to both endpoints (/v1/models and /v1/chat/completions)
- ✅ Rate limiting logic: uses the "query" limit type, applies only in cloud mode with authenticated users, and returns HTTP 429 when limits are exceeded
- ✅ Comprehensive test suite covering the rate limiting scenarios
- ✅ Changes committed: 38bd16a

Security Impact: This resolves the high-severity vulnerability by applying the same battle-tested rate limiting used throughout the codebase to the OpenAI compatibility endpoints. The endpoints are now protected against DoS attacks and resource exhaustion.
- Add rate limiting to /v1/models and /v1/chat/completions endpoints
- Use existing check_and_increment_limits with "query" limit type
- Only applies in cloud mode with authenticated users
- Returns HTTP 429 when limits exceeded
- Add comprehensive test suite for rate limiting scenarios

Resolves high-severity security vulnerability allowing DoS attacks on OpenAI compatibility API endpoints.

Co-authored-by: ArnavAgrawal03 <ArnavAgrawal03@users.noreply.github.com>
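The sketch below illustrates the pattern described in this commit message, not the actual code merged in 38bd16a. Morphik's real check_and_increment_limits is tier-based and persistent; the in-memory counter and the stub endpoint here are hypothetical stand-ins used only to show where the check sits and how HTTP 429 is surfaced.

```python
from collections import defaultdict

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

QUERY_LIMIT = 100  # hypothetical per-user "query" allowance
_query_counts: dict[str, int] = defaultdict(int)


async def check_and_increment_limits_stub(user_id: str, limit_type: str) -> None:
    """Stand-in for the real check_and_increment_limits(auth, "query") helper."""
    if limit_type == "query":
        _query_counts[user_id] += 1
        if _query_counts[user_id] > QUERY_LIMIT:
            raise HTTPException(status_code=429, detail="Query limit exceeded")


@app.post("/v1/chat/completions")
async def chat_completions(body: dict, authorization: str = Header(default="")):
    user_id = authorization or "anonymous"
    # In the real endpoint this check only runs in cloud mode for authenticated users.
    await check_and_increment_limits_stub(user_id, "query")
    # ...the actual handler would now delegate to the existing LiteLLM completion path.
    return {"object": "chat.completion", "choices": []}
```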
Summary

Implements OpenAI SDK compatibility for Morphik as requested in issue #142.

Implementation
- /v1/models - List available models in OpenAI format
- /v1/chat/completions - Chat completions with streaming support (see the sketch after this list)
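A brief sketch of exercising both endpoints with the OpenAI Python SDK, using the same placeholder client configuration as the earlier example; the model name is an assumption and should come from whatever /v1/models returns on your deployment.

```python
from openai import OpenAI

# Placeholder base URL and token for a Morphik deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-morphik-token")

# GET /v1/models: enumerate the models Morphik exposes in OpenAI format.
for model in client.models.list():
    print(model.id)

# POST /v1/chat/completions with stream=True: tokens arrive incrementally.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": "Give me a one-line status update."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```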
Benefits
Closes #142
Generated with Claude Code