
Conversation

@tremlin (Contributor) commented Sep 5, 2025

Title

Fix missing IMAGE token count for Gemini cost calculation

Relevant issues

fixes #14285

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

One unit test fails, but this is unrelated to this PR.

ERROR tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py::TestOpenAIFieldExclusionRegistry::test_convenience_registration_method - ImportError: cannot import name 'OpenAIFieldExclusionRegistry' from 'litellm.llms.openai.responses.transformation'

Type

🐛 Bug Fix

Changes

Counts IMAGE tokens as TEXT tokens, since this seems to be the best fix: image input for Gemini is usually billed per token at the same rate as text.
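
To illustrate the idea (a simplified sketch, not the exact diff in this PR): Gemini reports per-modality prompt token counts in usageMetadata.promptTokensDetails, and the IMAGE entry can simply be folded into the text total before the cost calculation. The helper name below is made up for illustration.

```python
from typing import Any, Dict, List


def _prompt_text_tokens(usage_metadata: Dict[str, Any]) -> int:
    """Sum TEXT and IMAGE modality prompt tokens from Gemini's usageMetadata.

    IMAGE tokens are billed at the same per-token rate as text for most
    Gemini models, so counting them together keeps the cost calculation simple.
    """
    details: List[Dict[str, Any]] = usage_metadata.get("promptTokensDetails", []) or []
    return sum(
        entry.get("tokenCount", 0)
        for entry in details
        if entry.get("modality") in ("TEXT", "IMAGE")
    )
```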

@vercel

vercel bot commented Sep 5, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Preview | Comments | Updated (UTC)
litellm | Ready      | Preview | Comment  | Sep 11, 2025 7:50am

@krrishdholakia (Contributor) commented

Wouldn't it be better to track it separately (similar to text and audio tokens), so that if there is a price change in future models, the code 'just works'? @tremlin

@tremlin (Contributor, Author) commented

@krrishdholakia Thank you for your feedback.
I've also considered this (counting image tokens separately), but I wasn't sure what the best approach was.

What do you think about the following:

  • introduce a new pricing key input_cost_per_image_token in model_prices_and_context_window.json
  • in litellm/litellm_core_utils/llm_cost_calc/utils.py, change calculate_cost_component and generic_cost_per_token so that
    • when the prompt_tokens_details contain an image_token_count (also to be added), it is first checked whether the model pricing defines the new input_cost_per_image_token option
    • if it does not, input_cost_per_token is used as the default for the cost calculation (a rough sketch of this fallback follows below)

Alternatively, we could decide to only count image tokens separately but always use input_cost_per_token for the cost calculation. This would avoid introducing a new pricing parameter that is not needed at the moment.

Both approaches have a sane default, which I think is important because it's hard to get pricing parameters right when there are too many options.
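
To make the first option concrete, here is a rough sketch of the fallback (illustration only, not the final change to calculate_cost_component / generic_cost_per_token; the parameter names are placeholders):

```python
from typing import Optional


def prompt_cost(
    text_tokens: int,
    image_tokens: int,
    input_cost_per_token: float,
    input_cost_per_image_token: Optional[float] = None,
) -> float:
    """Price image tokens with their own rate when one is configured,
    otherwise fall back to the regular text-token rate."""
    image_rate = (
        input_cost_per_image_token
        if input_cost_per_image_token is not None
        else input_cost_per_token  # sane default: bill images like text
    )
    return text_tokens * input_cost_per_token + image_tokens * image_rate


# Example: with no input_cost_per_image_token configured, 1_000 text tokens
# and 258 image tokens are all billed at input_cost_per_token:
# prompt_cost(1_000, 258, 1.25e-6) == (1_000 + 258) * 1.25e-6
```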

@krrishdholakia (Contributor) commented Sep 10, 2025

input_cost_per_image_token

Yes, this makes sense, and it follows the existing patterns as well! Please update and bump me once it's ready for review.

Thank you for your help!

@tremlin (Contributor, Author) commented

@krrishdholakia

I've finished my implementation and I am looking forward to your feedback!
I will be traveling starting next week, so I might not respond very quickly.

Some CI tests are failing, but all of these seem unrelated to my changes. When I run the unit tests locally, I get:

============================================================= short test summary info =============================================================
FAILED tests/test_litellm/llms/databricks/chat/test_databricks_chat_transformation.py::test_convert_anthropic_tool_to_databricks_tool_with_description - TypeError: litellm.types.llms.databricks.DatabricksFunction() got multiple values for keyword argument 'name'
FAILED tests/test_litellm/llms/databricks/chat/test_databricks_chat_transformation.py::test_convert_anthropic_tool_to_databricks_tool_without_description - TypeError: litellm.types.llms.databricks.DatabricksFunction() got multiple values for keyword argument 'name'
FAILED tests/test_litellm/litellm_core_utils/test_duration_parser.py::TestStandardizedResetTime::test_timezone_handling - zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key US/Eastern'
===================================== 3 failed, 2948 passed, 15 skipped, 2830 warnings in 2068.82s (0:34:28) ======================================

@github-actions bot commented

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@github-actions github-actions bot added the stale label Dec 11, 2025
@holtenko commented

Same issue here: image tokens are not being included.

@github-actions github-actions bot removed the stale label Dec 12, 2025

Development

Successfully merging this pull request may close these issues.

[Bug]: Missing Gemini IMAGE tokens in cost calculation

3 participants