Conversation

@Sameerlite (Collaborator)
Title

Fix: Populate spend_logs_metadata in batch and files endpoints

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement)

  • I have added a screenshot of my new test passing locally

  • My PR passes all unit tests on make test-unit

  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

Problem

When users passed litellm_metadata with spend_logs_metadata via extra_body in OpenAI SDK calls, the metadata field appeared as null in logs instead of containing the expected values.

Root Cause

  1. Files endpoint: The create_file endpoint wasn't reading litellm_metadata from multipart form data
  2. Parsing logic: The add_litellm_data_to_request function wasn't parsing litellm_metadata when sent as a JSON string (common in form data)

Solution

1. Updated litellm/proxy/openai_files_endpoints/files_endpoints.py

  • Added litellm_metadata: Optional[str] = Form(default=None) parameter to create_file endpoint
  • Extract metadata from form field before processing request
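Multipart form fields arrive as strings, so a JSON-encoded litellm_metadata has to be decoded before the request is processed. A minimal sketch of that decoding step (the helper name and its lenient error handling are illustrative, not LiteLLM's actual implementation):

```python
import json
from typing import Any, Dict, Optional


def parse_litellm_metadata_form_field(raw: Optional[str]) -> Optional[Dict[str, Any]]:
    """Decode a litellm_metadata form field from its JSON-string form.

    Hypothetical helper: returns None for missing, malformed, or
    non-object values instead of raising, so a bad client payload
    does not fail the whole upload.
    """
    if raw is None:
        return None
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    # Only a JSON object is a valid metadata payload.
    return parsed if isinstance(parsed, dict) else None
```

In the endpoint itself this would run on the `Form(default=None)` value before the request body is assembled.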

2. Updated litellm/proxy/litellm_pre_call_utils.py

  • Added parsing logic to convert litellm_metadata from JSON string to dict
  • Added merging logic to combine user metadata with system metadata
  • User-provided values take precedence over team defaults
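The parse-then-merge behavior described above can be sketched as a single function; the name and signature are illustrative (the real logic lives in add_litellm_data_to_request), but the precedence rule matches the description: user-provided keys override team/system defaults.

```python
import json
from typing import Any, Dict, Optional, Union


def merge_litellm_metadata(
    user_metadata: Union[str, Dict[str, Any], None],
    system_metadata: Dict[str, Any],
) -> Dict[str, Any]:
    """Merge user-supplied litellm_metadata over system/team defaults.

    Hypothetical sketch: accepts the metadata either as a dict
    (extra_body on JSON requests) or a JSON string (form data).
    """
    # Form data delivers litellm_metadata as a JSON string; decode it first.
    if isinstance(user_metadata, str):
        try:
            user_metadata = json.loads(user_metadata)
        except json.JSONDecodeError:
            user_metadata = None
    if not isinstance(user_metadata, dict):
        return dict(system_metadata)
    # Later keys win, so user values take precedence over defaults.
    return {**system_metadata, **user_metadata}
```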

Usage

For POST requests (create_batch, create_file):

import openai

# Point the SDK at the LiteLLM proxy; base_url and api_key here are placeholders.
client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-...")

client.files.create(
    file=open("data.jsonl", "rb"),
    purpose="batch",
    extra_body={
        "litellm_metadata": {
            "spend_logs_metadata": {
                "owner": "team-name",
                "product": "my-product"
            }
        }
    }
)

For GET requests (retrieve_batch, file_content):

import json

client.batches.retrieve(
    batch_id,
    extra_headers={
        "x-litellm-spend-logs-metadata": json.dumps({
            "owner": "team-name",
            "product": "my-product"
        })
    }
)
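On the server side, the GET path amounts to reading that header and decoding it back into a dict. A sketch of that lookup (the helper name is illustrative; headers are assumed already lower-cased, as most ASGI frameworks provide them):

```python
import json
from typing import Any, Dict, Optional


def read_spend_logs_metadata_header(headers: Dict[str, str]) -> Optional[Dict[str, Any]]:
    """Extract spend_logs_metadata from the x-litellm-spend-logs-metadata header.

    Hypothetical sketch: returns None when the header is absent or
    does not decode to a JSON object.
    """
    raw = headers.get("x-litellm-spend-logs-metadata")
    if raw is None:
        return None
    try:
        value = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return value if isinstance(value, dict) else None
```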

Testing

[Screenshots: new tests passing locally]

@vercel
vercel bot commented Nov 21, 2025

Deployment preview: project litellm, status Error, updated Nov 21, 2025 11:45am (UTC)

@Sameerlite Sameerlite marked this pull request as ready for review November 21, 2025 13:41
@krrishdholakia krrishdholakia merged commit a7e3d13 into litellm_sameer_nov_3 Nov 22, 2025
31 of 51 checks passed
ishaan-jaff pushed a commit that referenced this pull request Nov 22, 2025
* Add openai metadata field in the request

* Add docs related to openai metadata

* Add utils

* test_completion_openai_metadata[True]

* Added support for thought signature for gemini 3 in responses api (#16872)

* Added support for thought signature for gemini 3

* Update docs with all supported endpoints and cost tracking

* Added config based routing support for batches and files

* fix lint errors

* Litellm anthropic image url support (#16868)

* Add image as url support to anthropic

* fix mypy errors

* fix tests

* Fix: Populate spend_logs_metadata in batch and files endpoints (#16921)

* Add spend-logs-metadata to the metadata

* Add tests for spend logs metadata in batches

* use better names

* Remove support for penalty param for gemini 3 (#16907)

* Remove support for penalty param

* remove hallucinated model names

* fix mypy/test errors

* fix tests

* fix too many lines error

* fix too many lines error

* Add config for cicd test case

* Fix final tests

* fix batch tests

* fix batch tests
