@codeflash-ai codeflash-ai bot commented Oct 30, 2025

📄 26% (0.26x) speedup for get_cost_for_web_search_request in litellm/llms/__init__.py

⏱️ Runtime : 308 microseconds → 245 microseconds (best of 280 runs)

📝 Explanation and details

The optimization achieves a 25% speedup through several key improvements:

Import optimization: The most significant gain comes from moving PromptTokensDetailsWrapper and SearchContextCostPerQuery imports to module-level instead of function-level. The line profiler shows these imports taking 27-34% of function execution time in the original code. Moving them eliminates this per-call overhead.
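As a rough, self-contained illustration of the pattern (it uses `json` as a stand-in for the litellm wrapper types so it runs anywhere; the actual change moves the `PromptTokensDetailsWrapper` and `SearchContextCostPerQuery` imports out of the function body):

```python
import timeit

def cost_with_local_import(payload):
    import json  # executed on every call: import machinery + local name binding
    return json.dumps(payload)

import json  # module-level: resolved once, when the module is first loaded

def cost_with_module_import(payload):
    return json.dumps(payload)  # per call, only a fast global lookup remains

if __name__ == "__main__":
    data = {"web_search_requests": 3}
    # The relative gap between the two timings, not the absolute numbers, is the point.
    print(timeit.timeit(lambda: cost_with_local_import(data), number=100_000))
    print(timeit.timeit(lambda: cost_with_module_import(data), number=100_000))
```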

Simplified control flow:

  • Removed unnecessary intermediate variable assignments (such as `total_cost` in the gemini helper and `makes_web_search_request` in the vertex_ai helper)
  • Combined the early-return conditions in the anthropic function into a single compound `if` statement
  • Changed `if cost_per_web_search_request is None or cost_per_web_search_request == 0.0:` to the more Pythonic `if not cost_per_web_search_request:`, a single truthiness check that covers both cases

Direct return expressions: Instead of storing results in intermediate variables and then returning them, the optimized code returns calculated values directly, saving a redundant local store and load on each call. The sketch below illustrates this together with the control-flow changes above.
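A minimal before/after sketch of these rewrites, modeled on the anthropic helper; the attribute and key names follow the test stubs further down, not necessarily the exact litellm source:

```python
def cost_anthropic_before(usage, model_info):
    # Original style: separate guards, explicit None/0.0 check, intermediate variable.
    if model_info is None:
        return 0.0
    if usage is None or usage.server_tool_use is None:
        return 0.0
    if usage.server_tool_use.web_search_requests is None:
        return 0.0
    cost_per_web_search_request = (
        model_info.get("search_context_cost_per_query") or {}
    ).get("search_context_size_medium")
    if cost_per_web_search_request is None or cost_per_web_search_request == 0.0:
        return 0.0
    total_cost = cost_per_web_search_request * usage.server_tool_use.web_search_requests
    return total_cost

def cost_anthropic_after(usage, model_info):
    # Optimized style: one compound guard, truthiness check, direct return expression.
    if (
        model_info is None
        or usage is None
        or usage.server_tool_use is None
        or usage.server_tool_use.web_search_requests is None
    ):
        return 0.0
    cost_per_web_search_request = (
        model_info.get("search_context_cost_per_query") or {}
    ).get("search_context_size_medium")
    if not cost_per_web_search_request:  # covers both None and 0.0
        return 0.0
    return cost_per_web_search_request * usage.server_tool_use.web_search_requests
```

Both versions return the same values for the inputs exercised in the tests below; the second simply executes fewer bytecode operations per call.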

These optimizations are particularly effective for the test cases shown because:

  • Frequent calls with valid data (basic test cases) benefit most from eliminated import overhead
  • Early exit scenarios (edge cases with None values) see larger relative gains (up to 39% faster) due to simplified branching
  • Determinism tests show the import optimization clearly - second calls are 37-42% faster because even a cached function-level import still pays a per-call lookup and rebinding cost that the module-level import avoids

The optimizations maintain identical functionality while reducing Python interpreter overhead through fewer operations per function call.

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 63 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 100.0% |
🌀 Generated Regression Tests and Runtime
import pytest
from litellm.llms.__init__ import get_cost_for_web_search_request

# --- Minimal class definitions to support the tests ---

class PromptTokensDetailsWrapper:
    def __init__(self, web_search_requests=None):
        self.web_search_requests = web_search_requests

class ServerToolUseWrapper:
    def __init__(self, web_search_requests=None):
        self.web_search_requests = web_search_requests

class Usage:
    def __init__(
        self,
        prompt_tokens_details=None,
        server_tool_use=None,
    ):
        self.prompt_tokens_details = prompt_tokens_details
        self.server_tool_use = server_tool_use

class ModelInfo(dict):
    pass

class SearchContextCostPerQuery(dict):
    pass

# --- Function under test (as per litellm/llms/__init__.py) ---

def cost_per_web_search_request_gemini(usage: "Usage", model_info: "ModelInfo") -> float:
    cost_per_web_search_request = 35e-3
    number_of_web_search_requests = 0
    if (
        usage is not None
        and usage.prompt_tokens_details is not None
        and isinstance(usage.prompt_tokens_details, PromptTokensDetailsWrapper)
        and hasattr(usage.prompt_tokens_details, "web_search_requests")
        and usage.prompt_tokens_details.web_search_requests is not None
    ):
        number_of_web_search_requests = usage.prompt_tokens_details.web_search_requests
    else:
        number_of_web_search_requests = 0
    total_cost = cost_per_web_search_request * number_of_web_search_requests
    return total_cost

def cost_per_web_search_request_vertex_ai(usage: "Usage", model_info: "ModelInfo") -> float:
    cost_per_llm_call_with_web_search = 35e-3
    makes_web_search_request = False
    if (
        usage is not None
        and usage.prompt_tokens_details is not None
        and isinstance(usage.prompt_tokens_details, PromptTokensDetailsWrapper)
    ):
        makes_web_search_request = True
    if makes_web_search_request:
        return cost_per_llm_call_with_web_search
    else:
        return 0.0
from litellm.llms.__init__ import get_cost_for_web_search_request

# --- Unit tests ---

# --------------------------
# 1. BASIC TEST CASES
# --------------------------

def test_gemini_basic_single_request():
    # Gemini: 1 web search request
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=1))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.42μs -> 4.18μs (29.7% faster)

def test_gemini_basic_multiple_requests():
    # Gemini: 3 web search requests
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=3))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.02μs -> 3.96μs (26.9% faster)

def test_anthropic_basic_single_request():
    # Anthropic: 1 web search request, cost per request = 0.05
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=1))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.63μs -> 4.69μs (19.9% faster)

def test_anthropic_basic_multiple_requests():
    # Anthropic: 4 web search requests, cost per request = 0.02
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=4))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.02})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.21μs -> 4.04μs (28.9% faster)

def test_vertex_ai_basic_request():
    # Vertex AI: prompt_tokens_details present
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=1))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info) # 5.55μs -> 4.41μs (26.0% faster)

def test_vertex_ai_basic_no_request():
    # Vertex AI: prompt_tokens_details is None
    usage = Usage(prompt_tokens_details=None)
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info) # 4.57μs -> 3.45μs (32.3% faster)

def test_non_supported_provider_returns_none():
    # Non-supported provider returns None
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=2))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("openai", usage, model_info) # 574ns -> 604ns (4.97% slower)

# --------------------------
# 2. EDGE TEST CASES
# --------------------------

def test_gemini_zero_web_search_requests():
    # Gemini: 0 web search requests
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=0))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.75μs -> 4.65μs (23.7% faster)

def test_gemini_missing_prompt_tokens_details():
    # Gemini: prompt_tokens_details is None
    usage = Usage(prompt_tokens_details=None)
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 4.52μs -> 3.41μs (32.4% faster)

def test_gemini_missing_web_search_requests_attr():
    # Gemini: prompt_tokens_details exists but web_search_requests attribute is missing
    class DummyPromptTokensDetails:
        pass
    usage = Usage(prompt_tokens_details=DummyPromptTokensDetails())
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.54μs -> 4.34μs (27.8% faster)

def test_gemini_web_search_requests_is_none():
    # Gemini: web_search_requests is None
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=None))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.16μs -> 4.00μs (28.8% faster)

def test_anthropic_model_info_none():
    # Anthropic: model_info is None
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=1))
    model_info = None
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 4.60μs -> 3.63μs (26.7% faster)

def test_anthropic_usage_none():
    # Anthropic: usage is None
    usage = None
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.1})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 4.54μs -> 3.26μs (39.1% faster)

def test_anthropic_server_tool_use_none():
    # Anthropic: usage.server_tool_use is None
    usage = Usage(server_tool_use=None)
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.1})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 4.51μs -> 3.42μs (31.9% faster)

def test_anthropic_web_search_requests_none():
    # Anthropic: usage.server_tool_use.web_search_requests is None
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=None))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.1})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 4.48μs -> 3.48μs (29.0% faster)

def test_anthropic_search_context_cost_per_query_missing():
    # Anthropic: search_context_cost_per_query missing
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=2))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.63μs -> 4.51μs (24.9% faster)

def test_anthropic_search_context_size_medium_missing():
    # Anthropic: search_context_size_medium missing
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=2))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.02μs -> 4.19μs (20.0% faster)

def test_anthropic_search_context_size_medium_zero():
    # Anthropic: search_context_size_medium is zero
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=2))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.0})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 4.85μs -> 3.55μs (36.7% faster)

def test_vertex_ai_prompt_tokens_details_wrong_type():
    # Vertex AI: prompt_tokens_details is wrong type
    usage = Usage(prompt_tokens_details="not_a_wrapper")
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info) # 5.51μs -> 4.40μs (25.3% faster)

def test_vertex_ai_usage_none():
    # Vertex AI: usage is None
    usage = None
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info) # 4.32μs -> 3.33μs (29.5% faster)

def test_vertex_ai_model_info_none():
    # Vertex AI: model_info is None, should not affect cost
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=1))
    model_info = None
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info) # 5.59μs -> 4.59μs (21.8% faster)

def test_gemini_web_search_requests_negative():
    # Gemini: negative web_search_requests, should return negative cost
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=-2))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.60μs -> 4.68μs (19.7% faster)

def test_anthropic_web_search_requests_negative():
    # Anthropic: negative web_search_requests, should return negative cost
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=-3))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.1})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.76μs -> 4.68μs (23.1% faster)

# --------------------------
# 3. LARGE SCALE TEST CASES
# --------------------------

def test_gemini_large_number_of_requests():
    # Gemini: 999 web search requests
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=999))
    model_info = ModelInfo()
    expected_cost = 0.035 * 999
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.14μs -> 4.01μs (28.4% faster)

def test_anthropic_large_number_of_requests():
    # Anthropic: 1000 web search requests, cost per request = 0.01
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=1000))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.01})})
    expected_cost = 0.01 * 1000
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.56μs -> 4.42μs (25.7% faster)

def test_vertex_ai_large_scale():
    # Vertex AI: prompt_tokens_details present, should always return flat cost
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=999))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info) # 5.55μs -> 4.30μs (28.9% faster)

def test_gemini_high_web_search_requests():
    # Gemini: 1000 web search requests
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=1000))
    model_info = ModelInfo()
    expected_cost = 0.035 * 1000
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.32μs -> 4.13μs (28.8% faster)

def test_anthropic_high_cost_per_request():
    # Anthropic: 10 requests, cost per request = 1.5
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=10))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 1.5})})
    expected_cost = 1.5 * 10
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.56μs -> 4.41μs (26.0% faster)

def test_anthropic_large_scale_zero_cost():
    # Anthropic: 1000 requests, but cost per request is zero
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=1000))
    model_info = ModelInfo({"search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.0})})
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info) # 5.03μs -> 3.98μs (26.5% faster)

def test_gemini_large_scale_zero_requests():
    # Gemini: 1000 requests, but web_search_requests is zero
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=0))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info) # 5.39μs -> 4.28μs (25.7% faster)

def test_vertex_ai_large_scale_no_prompt_tokens_details():
    # Vertex AI: 1000 requests, but prompt_tokens_details is None
    usage = Usage(prompt_tokens_details=None)
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info) # 4.65μs -> 3.65μs (27.4% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
import pytest
from litellm.llms.__init__ import get_cost_for_web_search_request

# --- Minimal stubs for dependent types/classes ---

class PromptTokensDetailsWrapper:
    def __init__(self, web_search_requests=None):
        self.web_search_requests = web_search_requests

class ServerToolUseWrapper:
    def __init__(self, web_search_requests=None):
        self.web_search_requests = web_search_requests

class Usage:
    def __init__(self, prompt_tokens_details=None, server_tool_use=None):
        self.prompt_tokens_details = prompt_tokens_details
        self.server_tool_use = server_tool_use

class ModelInfo(dict):
    """Simple dict-based stub for ModelInfo."""
    pass

class SearchContextCostPerQuery(dict):
    """Simple dict-based stub for SearchContextCostPerQuery."""
    pass

# --- Functions under test (minimal implementations for testing) ---

def cost_per_web_search_request_gemini(usage: "Usage", model_info: "ModelInfo") -> float:
    cost_per_web_search_request = 35e-3
    number_of_web_search_requests = 0
    if (
        usage is not None
        and usage.prompt_tokens_details is not None
        and isinstance(usage.prompt_tokens_details, PromptTokensDetailsWrapper)
        and hasattr(usage.prompt_tokens_details, "web_search_requests")
        and usage.prompt_tokens_details.web_search_requests is not None
    ):
        number_of_web_search_requests = usage.prompt_tokens_details.web_search_requests
    else:
        number_of_web_search_requests = 0
    total_cost = cost_per_web_search_request * number_of_web_search_requests
    return total_cost

def cost_per_web_search_request_vertex_ai(usage: "Usage", model_info: "ModelInfo") -> float:
    cost_per_llm_call_with_web_search = 35e-3
    makes_web_search_request = False
    if (
        usage is not None
        and usage.prompt_tokens_details is not None
        and isinstance(usage.prompt_tokens_details, PromptTokensDetailsWrapper)
    ):
        makes_web_search_request = True
    if makes_web_search_request:
        return cost_per_llm_call_with_web_search
    else:
        return 0.0
from litellm.llms.__init__ import get_cost_for_web_search_request

# --- Unit tests ---

# ------------------ BASIC TEST CASES ------------------

def test_gemini_basic_single_web_search():
    """Gemini: Single web search request should cost $0.035"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=1))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 5.48μs -> 4.43μs (23.7% faster)

def test_gemini_basic_multiple_web_searches():
    """Gemini: Multiple web search requests should cost $0.035 per request"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=3))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 5.04μs -> 4.08μs (23.6% faster)

def test_anthropic_basic_single_web_search():
    """Anthropic: Single web search request, cost set in model_info"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=1))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 5.62μs -> 4.60μs (22.3% faster)

def test_anthropic_basic_multiple_web_searches():
    """Anthropic: Multiple web search requests, cost set in model_info"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=4))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 5.17μs -> 4.22μs (22.4% faster)

def test_vertex_ai_basic_web_search():
    """Vertex AI: Any web search present should cost $0.035"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=99))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info); cost = codeflash_output # 5.28μs -> 4.27μs (23.7% faster)

def test_vertex_ai_basic_no_web_search():
    """Vertex AI: No web search (prompt_tokens_details is None) should cost $0.0"""
    usage = Usage(prompt_tokens_details=None)
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info); cost = codeflash_output # 4.50μs -> 3.53μs (27.5% faster)

def test_unknown_provider_returns_none():
    """Unknown provider should return None"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=1))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("unknown_provider", usage, model_info); cost = codeflash_output # 542ns -> 541ns (0.185% faster)

# ------------------ EDGE TEST CASES ------------------

def test_gemini_zero_web_search_requests():
    """Gemini: Zero web search requests should cost $0.0"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=0))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 6.21μs -> 4.98μs (24.8% faster)

def test_gemini_none_web_search_requests():
    """Gemini: web_search_requests is None should cost $0.0"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=None))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 5.46μs -> 4.36μs (25.2% faster)

def test_gemini_missing_prompt_tokens_details():
    """Gemini: usage.prompt_tokens_details is None should cost $0.0"""
    usage = Usage(prompt_tokens_details=None)
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 4.58μs -> 3.54μs (29.3% faster)

def test_gemini_usage_is_none():
    """Gemini: usage is None should cost $0.0"""
    codeflash_output = get_cost_for_web_search_request("gemini", None, ModelInfo()); cost = codeflash_output # 4.24μs -> 3.33μs (27.0% faster)

def test_anthropic_usage_is_none():
    """Anthropic: usage is None should cost $0.0"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    codeflash_output = get_cost_for_web_search_request("anthropic", None, model_info); cost = codeflash_output # 4.86μs -> 3.78μs (28.5% faster)

def test_anthropic_model_info_is_none():
    """Anthropic: model_info is None should cost $0.0"""
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=1))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, None); cost = codeflash_output # 4.37μs -> 3.40μs (28.4% faster)

def test_anthropic_no_server_tool_use():
    """Anthropic: usage.server_tool_use is None should cost $0.0"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=None)
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 4.52μs -> 3.50μs (29.3% faster)

def test_anthropic_web_search_requests_none():
    """Anthropic: usage.server_tool_use.web_search_requests is None should cost $0.0"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=None))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 4.51μs -> 3.41μs (32.2% faster)

def test_anthropic_missing_search_context_cost_per_query():
    """Anthropic: model_info missing search_context_cost_per_query should cost $0.0"""
    model_info = ModelInfo()
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=1))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 5.50μs -> 4.37μs (25.9% faster)

def test_anthropic_search_context_size_medium_zero():
    """Anthropic: search_context_size_medium is zero should cost $0.0"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.0})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=5))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 5.04μs -> 4.04μs (24.7% faster)

def test_vertex_ai_prompt_tokens_details_wrong_type():
    """Vertex AI: prompt_tokens_details is not PromptTokensDetailsWrapper should cost $0.0"""
    usage = Usage(prompt_tokens_details="not_a_wrapper")
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info); cost = codeflash_output # 5.63μs -> 4.57μs (23.1% faster)

def test_vertex_ai_usage_is_none():
    """Vertex AI: usage is None should cost $0.0"""
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", None, ModelInfo()); cost = codeflash_output # 4.42μs -> 3.38μs (30.8% faster)

# ------------------ LARGE SCALE TEST CASES ------------------

def test_gemini_large_scale_many_web_search_requests():
    """Gemini: Large number of web search requests (999)"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=999))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 5.57μs -> 4.75μs (17.4% faster)

def test_anthropic_large_scale_many_web_search_requests():
    """Anthropic: Large number of web search requests (999)"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=999))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 5.46μs -> 4.45μs (22.8% faster)

def test_vertex_ai_large_scale_web_search_requests():
    """Vertex AI: Large number of web search requests, still costs $0.035 per call"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=999))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info); cost = codeflash_output # 5.18μs -> 4.42μs (17.3% faster)

def test_gemini_non_integer_web_search_requests():
    """Gemini: web_search_requests is a float (should multiply as normal)"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=2.5))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 5.30μs -> 4.41μs (20.1% faster)

def test_anthropic_non_integer_web_search_requests():
    """Anthropic: web_search_requests is a float (should multiply as normal)"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=2.5))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 5.48μs -> 4.33μs (26.6% faster)

def test_gemini_negative_web_search_requests():
    """Gemini: Negative web_search_requests should multiply as normal (could be a bug)"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=-3))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); cost = codeflash_output # 5.28μs -> 4.20μs (25.9% faster)

def test_anthropic_negative_web_search_requests():
    """Anthropic: Negative web_search_requests should multiply as normal (could be a bug)"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=-2))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); cost = codeflash_output # 5.76μs -> 4.71μs (22.3% faster)

# ------------------ DETERMINISM TESTS ------------------

def test_gemini_determinism():
    """Gemini: Multiple calls with same input should return same result"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=10))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); result1 = codeflash_output # 5.49μs -> 4.45μs (23.3% faster)
    codeflash_output = get_cost_for_web_search_request("gemini", usage, model_info); result2 = codeflash_output # 2.52μs -> 1.77μs (42.1% faster)

def test_anthropic_determinism():
    """Anthropic: Multiple calls with same input should return same result"""
    model_info = ModelInfo({
        "search_context_cost_per_query": SearchContextCostPerQuery({"search_context_size_medium": 0.05})
    })
    usage = Usage(server_tool_use=ServerToolUseWrapper(web_search_requests=7))
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); result1 = codeflash_output # 5.39μs -> 4.33μs (24.7% faster)
    codeflash_output = get_cost_for_web_search_request("anthropic", usage, model_info); result2 = codeflash_output # 2.46μs -> 1.79μs (37.4% faster)

def test_vertex_ai_determinism():
    """Vertex AI: Multiple calls with same input should return same result"""
    usage = Usage(prompt_tokens_details=PromptTokensDetailsWrapper(web_search_requests=1))
    model_info = ModelInfo()
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info); result1 = codeflash_output # 5.27μs -> 4.30μs (22.5% faster)
    codeflash_output = get_cost_for_web_search_request("vertex_ai_gemini", usage, model_info); result2 = codeflash_output # 2.45μs -> 1.86μs (31.6% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, `git checkout codeflash/optimize-get_cost_for_web_search_request-mhdyhzdb` and push.

@codeflash-ai codeflash-ai bot requested a review from mashraf-222 October 30, 2025 21:48
@codeflash-ai codeflash-ai bot added the ⚡️ codeflash (Optimization PR opened by Codeflash AI) and 🎯 Quality: High (Optimization Quality according to Codeflash) labels Oct 30, 2025