Feature: Enable passing in api_key programmatically besides reading from env #1134
Open: aryasoni98 wants to merge 14 commits into cocoindex-io:main from aryasoni98:issue-994
+102 −49
Conversation
georgeh0 reviewed Oct 4, 2025
- Fixed Python line length issue in llm.py by breaking long type annotation
- Fixed Rust function signature formatting in all LLM client files
- Fixed long function call formatting in embed_text.rs
- All formatting now complies with project standards

- Fixed api_bail! usage in context expecting LlmApiConfig return type
- Replaced unwrap_or_else with proper if-let pattern matching
- Resolves compilation error in GitHub Actions build test

- Removed trailing whitespace from all LLM client files
- Fixed formatting issues in gemini.rs, litellm.rs, openai.rs, openrouter.rs, vllm.rs
- Fixed trailing whitespace in embed_text.rs
- All files now comply with cargo fmt standards
georgeh0 reviewed Oct 5, 2025
georgeh0 reviewed Oct 10, 2025
This PR implements the ability to pass API keys programmatically to the `EmbedText` function, enabling portable AI agents that don't depend on host environment variables. This is particularly useful for CI/CD environments like Bitbucket pipelines where environment variables are managed through pipeline configuration.

Solution
Added `api_key` parameter to the `EmbedText` function and extended the LLM configuration system to support programmatic API key passing across all LLM providers.

🔧 Core Functionality
- Added `api_key` parameter to the `EmbedText` function spec (see the sketch below)
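As a rough illustration, a minimal sketch of what the spec change looks like; the base class and the existing field set shown here are assumptions, not the PR's exact code:

```python
from cocoindex import op
from cocoindex.llm import LlmApiType

class EmbedText(op.FunctionSpec):
    """Embed text into a vector space (existing fields partly elided)."""
    api_type: LlmApiType
    model: str
    # New in this PR: an explicit key that takes precedence over the
    # provider's environment variable (e.g. OPENAI_API_KEY).
    api_key: str | None = None
```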
📁 Files Modified
Python Files
- `python/cocoindex/functions.py` - Added `api_key` parameter to `EmbedText`
- `python/cocoindex/functions/_engine_builtin_specs.py` - Added `api_key` parameter to `EmbedText`
- `python/cocoindex/llm.py` - Added new LLM config classes with API key support

Rust Files
- `src/llm/mod.rs` - Extended `LlmApiConfig` enum with new config types
- `src/llm/voyage.rs` - Updated Voyage client to accept API key from config
- `src/llm/gemini.rs` - Updated Gemini client to accept API key from config
- `src/llm/anthropic.rs` - Updated Anthropic client to accept API key from config
- `src/llm/openai.rs` - Updated OpenAI client to accept API key from config
- `src/llm/litellm.rs` - Updated LiteLLM client to accept API key from config
- `src/llm/openrouter.rs` - Updated OpenRouter client to accept API key from config
- `src/llm/vllm.rs` - Updated VLLM client to accept API key from config
- `src/ops/functions/embed_text.rs` - Updated `EmbedText` function to handle `api_key` parameter

🚀 Usage Examples
Before (Environment Variable Only)
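A minimal sketch of the environment-variable-only pattern, assuming the pre-existing `EmbedText` spec fields `api_type` and `model` (the model name is illustrative):

```python
import cocoindex

# Before this PR: EmbedText has no api_key parameter, so the key must
# already be present in the host environment (e.g. OPENAI_API_KEY)
# when the flow runs.
spec = cocoindex.functions.EmbedText(
    api_type=cocoindex.LlmApiType.OPENAI,
    model="text-embedding-3-small",
)
```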
After (Programmatic API Key)
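The same spec with the key passed explicitly; the new `api_key` parameter is the only change:

```python
import cocoindex

def build_embed_spec(api_key: str) -> cocoindex.functions.EmbedText:
    # After this PR: the key travels with the spec, so the flow no
    # longer depends on OPENAI_API_KEY being set on the host.
    return cocoindex.functions.EmbedText(
        api_type=cocoindex.LlmApiType.OPENAI,
        model="text-embedding-3-small",
        api_key=api_key,
    )
```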
Bitbucket Pipeline Example
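A sketch for Bitbucket Pipelines, where secured repository variables are injected into the step's environment; the variable name `OPENAI_API_KEY_SECRET` is hypothetical:

```python
import os
import cocoindex

# The pipeline injects the secured variable into this step's environment;
# we read it once and pass it through explicitly, so the flow code itself
# stays portable across hosts.
spec = cocoindex.functions.EmbedText(
    api_type=cocoindex.LlmApiType.OPENAI,
    model="text-embedding-3-small",
    api_key=os.environ["OPENAI_API_KEY_SECRET"],
)
```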
Multiple API Types
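A sketch with two providers side by side; the `VOYAGE` enum member and the model names are assumptions based on the providers this PR touches:

```python
import cocoindex

# Hypothetical keys loaded from a secrets store rather than the host env.
keys = {"openai": "sk-...", "voyage": "pa-..."}

openai_embed = cocoindex.functions.EmbedText(
    api_type=cocoindex.LlmApiType.OPENAI,
    model="text-embedding-3-small",
    api_key=keys["openai"],
)
voyage_embed = cocoindex.functions.EmbedText(
    api_type=cocoindex.LlmApiType.VOYAGE,
    model="voyage-3",
    api_key=keys["voyage"],
)
```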
This implementation maintains full backward compatibility:
- If no `api_key` is provided, the system falls back to environment variables

🧪 Testing
📋 New LLM Config Classes
Added the following config classes with API key support (a usage sketch follows the list):
- `AnthropicConfig` - For Anthropic Claude models
- `GeminiConfig` - For Google Gemini models
- `VoyageConfig` - For Voyage AI models
- `LiteLlmConfig` - For LiteLLM proxy models
- `OpenRouterConfig` - For OpenRouter models
- `VllmConfig` - For VLLM models
- `OpenAiConfig` - Added `api_key` field
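A minimal sketch of how these might plug into `LlmSpec`; the `api_config` field name is taken from the Rust-side notes below, and everything else here is an assumption:

```python
from cocoindex.llm import LlmApiType, LlmSpec, OpenAiConfig

# Hedged sketch: attach an explicit key through the new config class
# instead of relying on the OPENAI_API_KEY environment variable.
spec = LlmSpec(
    api_type=LlmApiType.OPENAI,
    model="gpt-4o",  # illustrative model name
    api_config=OpenAiConfig(api_key="sk-..."),
)
```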
🔍 Implementation Details
Rust Side
- Extended `LlmApiConfig` enum to include all new config types
- Updated the LLM clients to accept an `api_config` parameter
- Updated the `EmbedText` function to create the appropriate API config based on the `api_key` parameter

Python Side
- Added `api_key` parameter to the `EmbedText` function spec
- Updated `LlmSpec` to support all new config types

🎉 Result
Issue #994 is now 100% resolved.