feat(black_forest_labs): add native image edit support for Black Forest Labs #18006
Conversation
Add native integration for Black Forest Labs image editing models (flux-kontext-pro, flux-kontext-max, flux-pro-1.0-fill, flux-pro-1.0-expand).

Changes:
- Add BlackForestLabsImageEditConfig for BFL API transformation
- Add BLACK_FOREST_LABS to LlmProviders enum
- Add use_multipart_form_data() to BaseImageEditConfig for JSON vs form-data
- Modify image_edit_handler to support JSON request bodies
- Add comprehensive unit tests

Closes BerriAI#11401
Add BFL models to model_prices_and_context_window.json with pricing:
- flux-kontext-pro: $0.04/image
- flux-kontext-max: $0.08/image
- flux-pro-1.0-fill: $0.05/image
- flux-pro-1.0-expand: $0.05/image

Add black_forest_labs_models set to __init__.py for model discovery.
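A sketch of what one such pricing entry might look like. The model name, provider, and per-image price come from the commit above; the exact key names are an assumption based on the common litellm pricing schema and may differ in the actual PR:

```json
"black_forest_labs/flux-kontext-pro": {
    "litellm_provider": "black_forest_labs",
    "mode": "image_edit",
    "output_cost_per_image": 0.04
}
```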
```python
start_time = time.time()

while time.time() - start_time < max_wait:
    response = httpx.get(
```
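The diff fragment above is truncated; a self-contained reconstruction of the polling pattern under review might look like the following. All names here (`poll_until_ready`, `fetch_status`, the `"Ready"` status value) are assumptions for illustration, not LiteLLM's actual identifiers:

```python
import time


def poll_until_ready(fetch_status, max_wait: float = 60.0, interval: float = 0.0):
    """Poll fetch_status() until it reports 'Ready' or max_wait elapses.

    fetch_status stands in for the httpx.get() call against BFL's
    polling endpoint in the real transformation code.
    """
    start_time = time.time()
    while time.time() - start_time < max_wait:
        result = fetch_status()
        if result.get("status") == "Ready":
            return result
        time.sleep(interval)
    raise TimeoutError("image edit did not finish within max_wait")


# Simulate a job that becomes ready on the third poll.
states = iter([
    {"status": "Pending"},
    {"status": "Pending"},
    {"status": "Ready", "result": {"sample": "url"}},
])
out = poll_until_ready(lambda: next(states))
assert out["status"] == "Ready"
```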
transformation files should not be making http requests
we'd want the base llm http handler to do this for us @Chesars
we should also try to use `_get_httpx_client` (and the async version) where possible, to avoid creating a new httpx client on each request
Regarding HTTP requests in transformation files: several existing providers (Gemini videos, RunwayML videos/image_generation/TTS, Azure AI OCR, Fireworks, and Ollama) already make HTTP requests in their transformation files. BFL uses the generic llm_http_handler.
With `_get_httpx_client`: fixed in b631c48
Resolve comment conflict in llm_http_handler.py by combining BFL and Gemini style comments for JSON request handling.
Replace direct httpx.get() calls with _get_httpx_client() to reuse cached HTTP client, following the pattern used by other providers (RunwayML, Azure AI OCR, Sagemaker, etc.).
Title
Add native image edit support for Black Forest Labs
Relevant issues
Fixes #11401
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
- My PR passes make test-unit

Type
🆕 New Feature
Changes
Add native integration for Black Forest Labs image editing models (flux-kontext-pro, flux-kontext-max, flux-pro-1.0-fill, flux-pro-1.0-expand).
Details
Add use_multipart_form_data() method to BaseImageEditConfig to distinguish between providers

Usage
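To illustrate what `use_multipart_form_data()` distinguishes, here is a minimal stand-in. Only the method name and the fact that BFL takes JSON rather than form-data come from this PR; the class bodies and `pick_content_type` helper are hypothetical simplifications, not LiteLLM's actual implementation:

```python
class BaseImageEditConfig:
    def use_multipart_form_data(self) -> bool:
        # Default: most image-edit providers expect multipart/form-data uploads.
        return True


class FakeBFLImageEditConfig(BaseImageEditConfig):
    def use_multipart_form_data(self) -> bool:
        # BFL's API takes a JSON request body instead, so the handler
        # must serialize the request differently.
        return False


def pick_content_type(config: BaseImageEditConfig) -> str:
    if config.use_multipart_form_data():
        return "multipart/form-data"
    return "application/json"


assert pick_content_type(BaseImageEditConfig()) == "multipart/form-data"
assert pick_content_type(FakeBFLImageEditConfig()) == "application/json"
```

This is the hook the modified image_edit_handler would consult to decide between building a form-data request and a JSON request body.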
Files Changed
- litellm/llms/black_forest_labs/ - New provider implementation
  - common_utils.py - Error class and API constants
  - image_edit/transformation.py - BlackForestLabsImageEditConfig with polling logic
- litellm/llms/base_llm/image_edit/transformation.py - Add use_multipart_form_data() method
- litellm/llms/custom_httpx/llm_http_handler.py - Support JSON request bodies for image_edit
- litellm/types/utils.py - Add BLACK_FOREST_LABS to LlmProviders enum
- litellm/utils.py - Register provider in get_provider_image_edit_config

Tests Added