Conversation

@codeflash-ai codeflash-ai bot commented Oct 30, 2025

📄 9% (0.09x) speedup for get_error_message in litellm/litellm_core_utils/exception_mapping_utils.py

⏱️ Runtime: 278 microseconds → 256 microseconds (best of 65 runs)

📝 Explanation and details

The optimized code achieves an 8% speedup by eliminating two performance bottlenecks in the original implementation:

Key optimizations:

  1. Removed unnecessary try/except block: The original code wrapped the entire function in a try/except that caught all exceptions and returned None. Since the internal logic doesn't perform operations that would raise exceptions under normal circumstances, this exception handling was pure overhead.

  2. Eliminated redundant hasattr() check: The original code used hasattr(error_obj, "body") followed by getattr(error_obj, "body"), performing the attribute lookup twice. The optimized version uses getattr(error_obj, "body", None) with a default value, consolidating both operations into a single, more efficient call (both shapes are sketched below).
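
As a rough illustration, here is a minimal sketch of the two shapes described above. It is a hedged approximation for this discussion, not necessarily the exact litellm source; the _original/_optimized suffixes exist only to distinguish the two variants.

from typing import Optional

# Original shape (as described above): broad try/except plus a redundant
# hasattr()/getattr() pair on the "body" attribute.
def get_error_message_original(error_obj) -> Optional[str]:
    try:
        error_msg: Optional[str] = None
        if hasattr(error_obj, "body"):          # first attribute lookup
            body = getattr(error_obj, "body")   # second attribute lookup
            if isinstance(body, dict):
                error_msg = body.get("message", None)
        return error_msg
    except Exception:
        return None

# Optimized shape: a single getattr() with a default and no try/except.
def get_error_message_optimized(error_obj) -> Optional[str]:
    body = getattr(error_obj, "body", None)     # one lookup, returns None instead of raising
    if isinstance(body, dict):
        return body.get("message", None)
    return None

Both variants return the same values for the inputs exercised in the generated tests below: a dict body with or without a "message" key, a non-dict body, an object without a body attribute, and None.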

Why these changes improve performance:

  • Exception handling overhead: Python's try/except blocks have setup costs even when no exceptions are raised. Removing unnecessary exception handling reduces this overhead.
  • Reduced attribute lookups: hasattr() internally performs the same attribute lookup as getattr(), so the original code was doing redundant work. The optimized version cuts this in half (a rough micro-benchmark is sketched after this list).
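
As a rough illustration of the lookup cost, the sketch below times the two access patterns with timeit. This is a hypothetical micro-benchmark, not part of the PR, and absolute numbers will vary by machine.

import timeit

class Err:
    def __init__(self):
        self.body = {"message": "boom"}

setup = "obj = Err()"
ns = {"Err": Err}

# Original pattern: hasattr() followed by getattr() -> two attribute lookups.
two_lookups = timeit.timeit(
    'getattr(obj, "body") if hasattr(obj, "body") else None',
    setup=setup, globals=ns, number=1_000_000,
)

# Optimized pattern: getattr() with a default -> one attribute lookup.
one_lookup = timeit.timeit(
    'getattr(obj, "body", None)',
    setup=setup, globals=ns, number=1_000_000,
)

print(f"hasattr+getattr: {two_lookups:.3f}s  getattr(default): {one_lookup:.3f}s")

The comparison only demonstrates the relative cost of the second lookup; it does not reproduce the PR's reported timings.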

Test case performance patterns:

The optimization shows consistent improvements across most test cases (7-26% faster), with particularly strong gains on:

  • Large dictionary operations (22-26% faster)
  • Objects with valid body attributes containing message keys (13-19% faster)
  • Edge cases with various data types (10-20% faster)

The few cases showing slight slowdowns (objects without a body attribute) are minor and are outweighed by the overall performance gains across the broader range of use cases.

Correctness verification report:

Test                          | Status
⚙️ Existing Unit Tests         | 🔘 None Found
🌀 Generated Regression Tests  | 1054 Passed
⏪ Replay Tests                | 🔘 None Found
🔎 Concolic Coverage Tests     | 🔘 None Found
📊 Tests Coverage              | 80.0%
🌀 Generated Regression Tests and Runtime
from typing import Optional

# imports
import pytest
from litellm.litellm_core_utils.exception_mapping_utils import \
    get_error_message

# ----------------------------------------
# Unit tests for get_error_message
# ----------------------------------------

# Helper class to simulate error objects
class DummyError:
    def __init__(self, body=None):
        self.body = body

# 1. Basic Test Cases

def test_none_input_returns_none():
    # Test with None as input
    codeflash_output = get_error_message(None) # 468ns -> 432ns (8.33% faster)

def test_body_with_message_returns_message():
    # Test with a typical error object with body containing message
    err = DummyError(body={"message": "This is an error."})
    codeflash_output = get_error_message(err) # 757ns -> 703ns (7.68% faster)

def test_body_with_empty_message_returns_empty_string():
    # Test with body containing an empty string message
    err = DummyError(body={"message": ""})
    codeflash_output = get_error_message(err) # 797ns -> 721ns (10.5% faster)

def test_body_with_no_message_key_returns_none():
    # Test with body as dict but no 'message' key
    err = DummyError(body={"code": "foo", "type": "bar"})
    codeflash_output = get_error_message(err) # 790ns -> 697ns (13.3% faster)

def test_body_with_message_none_returns_none():
    # Test with body containing message=None
    err = DummyError(body={"message": None})
    codeflash_output = get_error_message(err) # 772ns -> 695ns (11.1% faster)

# 2. Edge Test Cases

def test_error_obj_without_body_attr_returns_none():
    # Test with object that does not have 'body' attribute
    class NoBody:
        pass
    err = NoBody()
    codeflash_output = get_error_message(err) # 654ns -> 767ns (14.7% slower)

def test_error_obj_body_is_not_dict_returns_none():
    # Test with 'body' attribute that is not a dict (e.g., string)
    err = DummyError(body="not a dict")
    codeflash_output = get_error_message(err) # 710ns -> 657ns (8.07% faster)

def test_error_obj_body_is_list_returns_none():
    # Test with 'body' attribute as a list
    err = DummyError(body=["message", "error"])
    codeflash_output = get_error_message(err) # 716ns -> 650ns (10.2% faster)

def test_error_obj_body_is_int_returns_none():
    # Test with 'body' attribute as an int
    err = DummyError(body=1234)
    codeflash_output = get_error_message(err) # 712ns -> 637ns (11.8% faster)

def test_error_obj_body_dict_message_is_int():
    # Test with body dict where message is an int
    err = DummyError(body={"message": 42})
    codeflash_output = get_error_message(err) # 839ns -> 741ns (13.2% faster)

def test_error_obj_body_dict_message_is_list():
    # Test with body dict where message is a list
    err = DummyError(body={"message": ["error", "details"]})
    codeflash_output = get_error_message(err) # 774ns -> 739ns (4.74% faster)

def test_error_obj_body_dict_message_is_dict():
    # Test with body dict where message is a dict
    err = DummyError(body={"message": {"detail": "bad"}})
    codeflash_output = get_error_message(err) # 801ns -> 704ns (13.8% faster)

def test_error_obj_body_dict_with_extra_keys():
    # Test with body dict with many keys, message present
    err = DummyError(body={"message": "found", "code": 123, "foo": "bar"})
    codeflash_output = get_error_message(err) # 767ns -> 608ns (26.2% faster)

def test_error_obj_is_builtin_type_returns_none():
    # Test with error_obj as a built-in type (e.g., int)
    codeflash_output = get_error_message(123) # 562ns -> 686ns (18.1% slower)
    codeflash_output = get_error_message("error") # 469ns -> 358ns (31.0% faster)
    codeflash_output = get_error_message(["error"]) # 251ns -> 216ns (16.2% faster)
    codeflash_output = get_error_message({"body": {"message": "foo"}}) # 232ns -> 240ns (3.33% slower)

def test_error_obj_body_is_none_returns_none():
    # Test with body attribute set to None
    err = DummyError(body=None)
    codeflash_output = get_error_message(err) # 689ns -> 585ns (17.8% faster)

def test_error_obj_body_dict_message_key_missing_among_many():
    # Test with body dict with many keys, but no 'message'
    err = DummyError(body={"code": "foo", "param": "bar", "type": "baz"})
    codeflash_output = get_error_message(err) # 788ns -> 674ns (16.9% faster)

def test_error_obj_body_dict_message_is_falsey_values():
    # Test with body dict where message is False or 0
    err_false = DummyError(body={"message": False})
    err_zero = DummyError(body={"message": 0})
    codeflash_output = get_error_message(err_false) # 800ns -> 704ns (13.6% faster)
    codeflash_output = get_error_message(err_zero) # 443ns -> 386ns (14.8% faster)

def test_error_obj_body_dict_message_is_empty_container():
    # Test with body dict where message is empty list or dict
    err_list = DummyError(body={"message": []})
    err_dict = DummyError(body={"message": {}})
    codeflash_output = get_error_message(err_list) # 738ns -> 676ns (9.17% faster)
    codeflash_output = get_error_message(err_dict) # 423ns -> 376ns (12.5% faster)

def test_error_obj_body_dict_message_is_bytes():
    # Test with body dict where message is bytes
    err = DummyError(body={"message": b"bytes"})
    codeflash_output = get_error_message(err) # 778ns -> 649ns (19.9% faster)

def test_error_obj_body_dict_message_is_callable():
    # Test with body dict where message is a function
    def f(): return 1
    err = DummyError(body={"message": f})
    codeflash_output = get_error_message(err) # 728ns -> 610ns (19.3% faster)

# 3. Large Scale Test Cases

def test_large_body_dict_with_message_key():
    # Test with a large body dict (999 keys), message near the end
    body = {f"key{i}": i for i in range(998)}
    body["message"] = "large error"
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 816ns -> 667ns (22.3% faster)

def test_large_body_dict_without_message_key():
    # Test with a large body dict (999 keys), no message key
    body = {f"key{i}": i for i in range(999)}
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 778ns -> 719ns (8.21% faster)



def test_large_error_obj_with_body_as_large_dict_of_non_messages():
    # Test with body as a dict with many non-message keys, message is missing
    body = {f"foo{i}": i for i in range(999)}
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 1.04μs -> 938ns (11.2% faster)

def test_large_error_obj_with_body_as_large_dict_and_message_none():
    # Test with body as a dict with many keys, message=None
    body = {f"foo{i}": i for i in range(998)}
    body["message"] = None
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 883ns -> 799ns (10.5% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
from typing import Optional

# imports
import pytest  # used for our unit tests
from litellm.litellm_core_utils.exception_mapping_utils import \
    get_error_message

# unit tests

# Helper class to simulate error objects with arbitrary attributes
class DummyError:
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

# ------------------- BASIC TEST CASES -------------------

def test_none_input_returns_none():
    # Test passing None returns None
    codeflash_output = get_error_message(None) # 567ns -> 528ns (7.39% faster)

def test_body_with_message_returns_message():
    # Test when error_obj.body is a dict with a 'message' key
    msg = "This is an error message"
    err = DummyError(body={"message": msg})
    codeflash_output = get_error_message(err) # 916ns -> 778ns (17.7% faster)

def test_body_with_message_and_other_keys():
    # Test when error_obj.body has other keys besides 'message'
    msg = "Another error"
    err = DummyError(body={"message": msg, "type": "invalid_request_error"})
    codeflash_output = get_error_message(err) # 843ns -> 721ns (16.9% faster)

def test_body_without_message_returns_none():
    # Test when error_obj.body is a dict without 'message'
    err = DummyError(body={"type": "invalid_request_error"})
    codeflash_output = get_error_message(err) # 750ns -> 721ns (4.02% faster)

def test_body_is_not_dict_returns_none():
    # Test when error_obj.body exists but is not a dict
    err = DummyError(body="not a dict")
    codeflash_output = get_error_message(err) # 688ns -> 622ns (10.6% faster)

def test_no_body_attribute_returns_none():
    # Test when error_obj does not have a 'body' attribute
    err = DummyError(message="Error code: 400")
    codeflash_output = get_error_message(err) # 569ns -> 610ns (6.72% slower)

# ------------------- EDGE TEST CASES -------------------

def test_body_is_empty_dict_returns_none():
    # Test when error_obj.body is an empty dict
    err = DummyError(body={})
    codeflash_output = get_error_message(err) # 828ns -> 761ns (8.80% faster)

def test_body_is_none_returns_none():
    # Test when error_obj.body is None
    err = DummyError(body=None)
    codeflash_output = get_error_message(err) # 690ns -> 596ns (15.8% faster)

def test_message_is_empty_string():
    # Test when error_obj.body['message'] is an empty string
    err = DummyError(body={"message": ""})
    codeflash_output = get_error_message(err) # 822ns -> 722ns (13.9% faster)

def test_message_is_none():
    # Test when error_obj.body['message'] is None
    err = DummyError(body={"message": None})
    codeflash_output = get_error_message(err) # 791ns -> 700ns (13.0% faster)

def test_body_is_list_returns_none():
    # Test when error_obj.body is a list
    err = DummyError(body=["message"])
    codeflash_output = get_error_message(err) # 668ns -> 607ns (10.0% faster)

def test_body_is_int_returns_none():
    # Test when error_obj.body is an int
    err = DummyError(body=123)
    codeflash_output = get_error_message(err) # 657ns -> 578ns (13.7% faster)

def test_body_is_custom_object_returns_none():
    # Test when error_obj.body is a custom object (not dict)
    class Custom:
        pass
    err = DummyError(body=Custom())
    codeflash_output = get_error_message(err) # 898ns -> 790ns (13.7% faster)

def test_body_dict_with_non_string_message():
    # Test when error_obj.body['message'] is an int
    err = DummyError(body={"message": 42})
    codeflash_output = get_error_message(err) # 756ns -> 685ns (10.4% faster)

def test_error_obj_is_not_object_returns_none():
    # Test when error_obj is a dict, not an object with attributes
    error_obj = {"body": {"message": "dict error"}}
    codeflash_output = get_error_message(error_obj) # 686ns -> 652ns (5.21% faster)

def test_error_obj_is_string_returns_none():
    # Test when error_obj is a string
    error_obj = "some error string"
    codeflash_output = get_error_message(error_obj) # 679ns -> 600ns (13.2% faster)

def test_error_obj_is_int_returns_none():
    # Test when error_obj is an int
    error_obj = 12345
    codeflash_output = get_error_message(error_obj) # 493ns -> 598ns (17.6% slower)

def test_body_dict_with_extra_nested_message():
    # Test when error_obj.body['message'] is a dict (unusual, but possible)
    nested_msg = {"detail": "nested message"}
    err = DummyError(body={"message": nested_msg})
    codeflash_output = get_error_message(err) # 816ns -> 685ns (19.1% faster)

def test_body_dict_with_multiple_keys_and_message():
    # Test when error_obj.body has many keys including 'message'
    msg = "multi-key error"
    err = DummyError(body={"message": msg, "foo": "bar", "baz": 123})
    codeflash_output = get_error_message(err) # 705ns -> 716ns (1.54% slower)

# ------------------- LARGE SCALE TEST CASES -------------------

def test_large_body_dict_with_message():
    # Test with a large body dict (under 1000 keys)
    msg = "large error message"
    body = {f"key_{i}": i for i in range(999)}
    body["message"] = msg
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 762ns -> 812ns (6.16% slower)

def test_large_body_dict_without_message():
    # Test with a large body dict (no 'message' key)
    body = {f"key_{i}": i for i in range(999)}
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 830ns -> 817ns (1.59% faster)

def test_large_number_of_error_objects():
    # Test many error objects in a loop
    msg = "bulk error"
    for i in range(1000):
        err = DummyError(body={"message": msg + str(i)})
        codeflash_output = get_error_message(err) # 239μs -> 221μs (8.37% faster)

def test_large_body_dict_message_at_start():
    # Test with a large dict, 'message' key at the start
    body = {"message": "start error"}
    body.update({f"k{i}": i for i in range(998)})
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 869ns -> 751ns (15.7% faster)

def test_large_body_dict_message_at_end():
    # Test with a large dict, 'message' key at the end
    body = {f"k{i}": i for i in range(998)}
    body["message"] = "end error"
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 755ns -> 667ns (13.2% faster)

def test_large_body_dict_message_is_long_string():
    # Test with a very long string as the message
    long_msg = "A" * 1000
    body = {"message": long_msg}
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 710ns -> 683ns (3.95% faster)

def test_large_body_dict_message_is_large_list():
    # Test with a large list as the message value
    large_list = [i for i in range(999)]
    body = {"message": large_list}
    err = DummyError(body=body)
    codeflash_output = get_error_message(err) # 720ns -> 706ns (1.98% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, git checkout codeflash/optimize-get_error_message-mhdkt83p and push.

@codeflash-ai codeflash-ai bot requested a review from mashraf-222 October 30, 2025 15:25
@codeflash-ai codeflash-ai bot added ⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: High Optimization Quality according to Codeflash labels Oct 30, 2025