
Conversation

@Chesars Chesars commented Dec 15, 2025

Title

Add native image edit support for Black Forest Labs

Relevant issues

Fixes #11401

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

Add native integration for Black Forest Labs image editing models (flux-kontext-pro, flux-kontext-max, flux-pro-1.0-fill, flux-pro-1.0-expand).

Details

  • Polling-based async API: BFL returns a task ID immediately, then we poll for the result
  • JSON request body: BFL uses JSON (not multipart form-data like OpenAI)
  • Added use_multipart_form_data() method to BaseImageEditConfig to distinguish between providers

Usage

import litellm
import os

os.environ["BFL_API_KEY"] = "your-api-key"

response = litellm.image_edit(
    model="black_forest_labs/flux-kontext-pro",
    image=open("input.png", "rb"),
    prompt="Add a green leaf",
)
print(response.data[0].url)
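Per the Details above, BFL returns a task ID immediately and the handler then polls for the result. A dependency-free sketch of that polling loop; the function name and the response shape here are illustrative, not the real BFL API:

```python
import time


def poll_until_ready(get_status, task_id, max_wait=60.0, interval=0.5):
    """Poll a BFL-style async task until it is ready or max_wait elapses.

    get_status is a caller-supplied function returning a dict such as
    {"status": "Ready", "result": {...}}; the actual BFL response
    shape may differ.
    """
    deadline = time.time() + max_wait
    while time.time() < deadline:
        payload = get_status(task_id)
        if payload.get("status") == "Ready":
            return payload["result"]
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} not ready after {max_wait}s")
```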

Files Changed

  • litellm/llms/black_forest_labs/ - New provider implementation
    • common_utils.py - Error class and API constants
    • image_edit/transformation.py - BlackForestLabsImageEditConfig with polling logic
  • litellm/llms/base_llm/image_edit/transformation.py - Add use_multipart_form_data() method
  • litellm/llms/custom_httpx/llm_http_handler.py - Support JSON request bodies for image_edit
  • litellm/types/utils.py - Add BLACK_FOREST_LABS to LlmProviders enum
  • litellm/utils.py - Register provider in get_provider_image_edit_config

Tests Added

[Screenshot: unit tests passing locally (Screenshot 2025-12-15 at 19 27 55)]

Add native image edit support for Black Forest Labs

Add native integration for Black Forest Labs image editing models
(flux-kontext-pro, flux-kontext-max, flux-pro-1.0-fill, flux-pro-1.0-expand).

Changes:
- Add BlackForestLabsImageEditConfig for BFL API transformation
- Add BLACK_FOREST_LABS to LlmProviders enum
- Add use_multipart_form_data() to BaseImageEditConfig for JSON vs form-data
- Modify image_edit_handler to support JSON request bodies
- Add comprehensive unit tests

Closes BerriAI#11401

vercel bot commented Dec 15, 2025

The latest updates on your projects.

Project   Deployment   Review                     Updated (UTC)
litellm   Ready        Ready (Preview, Comment)   Dec 17, 2025 3:20am

Add BFL models to model_prices_and_context_window.json with pricing:
- flux-kontext-pro: $0.04/image
- flux-kontext-max: $0.08/image
- flux-pro-1.0-fill: $0.05/image
- flux-pro-1.0-expand: $0.05/image

Add black_forest_labs_models set to __init__.py for model discovery.
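The pricing entries above would land in model_prices_and_context_window.json roughly as below. The key names follow litellm's usual schema but are not copied from the PR diff, so treat this as an approximation:

```json
{
  "black_forest_labs/flux-kontext-pro": {
    "litellm_provider": "black_forest_labs",
    "mode": "image_edit",
    "output_cost_per_image": 0.04
  },
  "black_forest_labs/flux-kontext-max": {
    "litellm_provider": "black_forest_labs",
    "mode": "image_edit",
    "output_cost_per_image": 0.08
  }
}
```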
start_time = time.time()

while time.time() - start_time < max_wait:
    response = httpx.get(
Contributor

transformation files should not be making http requests

we'd want the base llm http handler to do this for us @Chesars

we should also try and use the _get_httpx_client (and the async version) where possible, to avoid creating a new httpx client on each request
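A generic illustration of the client-reuse pattern the reviewer is asking for. The stand-in class and function below are illustrative; litellm's actual _get_httpx_client has its own location and signature:

```python
from functools import lru_cache


class FakeHTTPClient:
    """Dependency-free stand-in for httpx.Client in this sketch."""

    def __init__(self):
        self.closed = False


@lru_cache(maxsize=1)
def get_shared_client() -> FakeHTTPClient:
    # Construct the client (and its connection pool) once, then reuse it
    # on every request instead of building a new one per call.
    return FakeHTTPClient()
```

Every caller gets the same cached instance, which avoids repeatedly paying connection-pool setup costs.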

Contributor Author

Regarding HTTP requests in transformation files: several providers (Gemini videos, RunwayML videos/image_generation/TTS, Azure AI OCR, Fireworks, and Ollama) already make HTTP requests in their transformation files. BFL uses the generic llm_http_handler.

With _get_httpx_client: Fixed in b631c48

Resolve comment conflict in llm_http_handler.py by combining
BFL and Gemini style comments for JSON request handling.

Replace direct httpx.get() calls with _get_httpx_client() to reuse the
cached HTTP client, following the pattern used by other providers
(RunwayML, Azure AI OCR, Sagemaker, etc.).


Development

Successfully merging this pull request may close these issues.

[Feature]: Support for Black Forest Labs' flux-kontext-pro model in LiteLLM's /images/edits endpoint
