`any-llm` offers:
- Simple, unified interface - one function for all providers, switch models with just a string change
- Developer friendly - full type hints for better IDE support and clear, actionable error messages
- Leverages official provider SDKs when available, reducing maintenance burden and ensuring compatibility
- Stays framework-agnostic so it can be used across different projects and use cases
- Actively maintained - we use this in our own product (any-agent), ensuring continued support
- No proxy or gateway server required - you don't need to stand up and operate an extra service to talk to whichever LLM provider you need
The landscape of LLM provider interfaces is fragmented, with several challenges that `any-llm` aims to address:
The Challenge with API Standardization:
While the OpenAI API has become the de facto standard for LLM provider interfaces, providers implement slight variations. Some providers are fully OpenAI-compatible, while others may have different parameter names, response formats, or feature sets. This creates a need for light wrappers that can gracefully handle these differences while maintaining a consistent interface.
Existing Solutions and Their Limitations:
- LiteLLM: While popular, it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected behavior modifications.
- AISuite: Offers a clean, modular approach but lacks active maintenance, comprehensive testing, and modern Python typing standards.
- Framework-specific solutions: Some agent frameworks either depend on LiteLLM or implement their own provider integrations, creating fragmentation.
- Proxy-only solutions: Offerings like OpenRouter and Portkey require a hosted proxy server to sit between your code and the LLM provider.
Try `any-llm` in action with our interactive chat demo, which showcases streaming completions and provider switching:
The demo features:
- Real-time streaming responses with character-by-character display (see the streaming sketch after this list)
- Support for multiple LLM providers with easy switching
- Collapsible "thinking" content display for supported models
- Clean chat interface with auto-scrolling
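
As a taste of what the demo does under the hood, here is a minimal streaming sketch. It assumes `completion` accepts a `stream=True` flag and yields OpenAI-style chunks whose `delta` carries incremental text; check the any-llm docs for the exact streaming contract.

```python
from any_llm import completion

# Sketch only: assumes completion(stream=True) yields OpenAI-style chunks
# whose delta holds the incremental text for each choice.
for chunk in completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```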
- Python 3.11 or newer
- An API key for whichever LLM provider you choose to use
In your pip install, include the providers that you plan on using, or use the `all` option if you want to install support for all `any-llm` supported providers.
```bash
pip install 'any-llm-sdk[mistral,ollama]'
```
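
If you'd rather not enumerate providers up front, the `all` option mentioned above installs support for everything (assuming the extra is literally named `all`):

```bash
pip install 'any-llm-sdk[all]'
```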
Make sure you have the appropriate API key environment variable set for your provider. Alternatively, you can pass the `api_key` parameter when making a completion call instead of setting an environment variable, as shown in the sketch below.
```bash
export MISTRAL_API_KEY="YOUR_KEY_HERE"  # or OPENAI_API_KEY, etc.
```
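
For example, a sketch of the per-call key approach described above; it assumes `api_key` is accepted alongside the other `completion` parameters:

```python
from any_llm import completion

# Pass the key directly instead of reading it from the environment
response = completion(
    model="mistral-small-latest",
    provider="mistral",
    api_key="YOUR_KEY_HERE",  # assumption: per-call api_key, as described above
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```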
Recommended approach: Use separate `provider` and `model` parameters:
```python
from any_llm import completion
import os

# Make sure you have the appropriate environment variable set
assert os.environ.get("MISTRAL_API_KEY")

response = completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
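
The Responses section below mentions an async `aresponses`, so an async counterpart to `completion` presumably follows the same naming. A hedged sketch, assuming it is called `acompletion` and mirrors the sync signature:

```python
import asyncio

from any_llm import acompletion  # assumption: async twin of completion


async def main() -> None:
    response = await acompletion(
        model="mistral-small-latest",
        provider="mistral",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```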
Alternative syntax: You can also use the combined `provider:model` format:
```python
response = completion(
    model="mistral:mistral-small-latest",  # <provider_id>:<model_id>
    messages=[{"role": "user", "content": "Hello!"}],
)
```
The `provider_id` should be one of the provider IDs supported by `any-llm`. The `model_id` portion is passed directly to the provider internals: to find out which model IDs a provider offers, refer to the provider's documentation or use our `list_models` API if the provider supports it (see the sketch below).
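
A hedged sketch of the `list_models` call mentioned above; the exact signature and return shape are assumptions, so verify against the any-llm docs:

```python
from any_llm import list_models  # assumption: exported at the package root

# Assumption: list_models accepts a provider id and returns objects with an `id` field
for model in list_models(provider="mistral"):
    print(model.id)
```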
For providers that implement the OpenAI-style Responses API, use `responses` or `aresponses`:
```python
from any_llm import responses

result = responses(
    model="gpt-4o-mini",
    provider="openai",
    input_data=[
        {"role": "user", "content": [
            {"type": "text", "text": "Summarize this in one sentence."}
        ]}
    ],
)

# Non-streaming returns an OpenAI-compatible Responses object alias
print(result.output_text)
```