
(proxy docker) add sentry sdk to litellm docker #5965

Merged
Merged 1 commit into main on Sep 29, 2024

Conversation

ishaan-jaff
Contributor

Title

Relevant issues

Type

πŸ†• New Feature
πŸ› Bug Fix
🧹 Refactoring
πŸ“– Documentation
πŸš„ Infrastructure
βœ… Test

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes
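The change itself is small: it adds the sentry-sdk package to the litellm Docker image. For flavor, here is a minimal sketch of how a Python service typically initializes Sentry once the package is present in the image; the SENTRY_DSN and SENTRY_TRACES_SAMPLE_RATE environment variable names are illustrative assumptions, not necessarily litellm's exact wiring.

```python
# Hedged sketch: initialize Sentry in a Python service once sentry-sdk is
# installed in the image. Env var names here are assumptions for illustration.
import os

import sentry_sdk

dsn = os.environ.get("SENTRY_DSN")
if dsn:
    sentry_sdk.init(
        dsn=dsn,
        # Keep performance tracing off unless explicitly enabled.
        traces_sample_rate=float(os.environ.get("SENTRY_TRACES_SAMPLE_RATE", "0.0")),
    )
```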


vercel bot commented Sep 28, 2024

The latest updates on your projects. Learn more about Vercel for Git.

Name: litellm | Status: ✅ Ready | Updated (UTC): Sep 28, 2024 10:26pm

@ishaan-jaff ishaan-jaff merged commit e70d1a2 into main Sep 29, 2024
17 checks passed
krrishdholakia added a commit that referenced this pull request Sep 29, 2024
* fix(batch_redis_get.py): handle custom namespace

Fix #5917
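As an illustration of this namespace fix (a sketch, not the actual batch_redis_get.py code, and the "namespace:key" prefix format is an assumption): batched reads have to apply the same custom prefix that writes used, or lookups silently miss.

```python
# Illustrative only: batch-reading Redis keys under an optional custom namespace.
from typing import Optional

import redis.asyncio as redis

async def batch_get(client: redis.Redis, keys: list[str], namespace: Optional[str] = None):
    # Writes under a custom namespace store keys as "<namespace>:<key>",
    # so reads must apply the same prefix before the batched MGET.
    full_keys = [f"{namespace}:{k}" for k in keys] if namespace else keys
    return await client.mget(full_keys)
```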

* fix(litellm_logging.py): fix linting error

* refactor(test_proxy_utils.py): place at root level test folder

* refactor: move all testing to top-level of repo

Closes #486

* refactor: fix imports

* refactor(test_stream_chunk_builder.py): fix import

* build(config.yml): fix build_and_test part of tests

* fix(parallel_request_limiter.py): return remaining tpm/rpm in openai-compatible way

Fixes #5957

* fix(return-openai-compatible-headers): v0 is openai, azure, anthropic

Fixes #5957

* fix(utils.py): guarantee openai-compatible headers always exist in response

Fixes #5957
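The gist of these three header fixes, sketched below. Header names follow OpenAI's documented x-ratelimit-* convention; the function name and fallback values are illustrative, not the proxy's actual code.

```python
# Sketch: make sure OpenAI-style rate-limit headers are always present on a
# response, filling in any the upstream provider did not return.
def ensure_openai_ratelimit_headers(headers: dict, remaining_rpm: int, remaining_tpm: int) -> dict:
    headers.setdefault("x-ratelimit-remaining-requests", str(remaining_rpm))
    headers.setdefault("x-ratelimit-remaining-tokens", str(remaining_tpm))
    return headers
```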

* fix(azure): return response headers for sync embedding calls

* fix(router.py): handle setting response headers during retries

* fix(utils.py): fix updating hidden params

* fix(router.py): skip setting model_group response headers for now

current implementation increases redis cache calls by 3x

* docs(reliability.md): add tutorial on setting wildcard models as fallbacks

* fix(caching.py): cleanup print_stack()

* fix(parallel_request_limiter.py): make sure hidden params is dict before dereferencing

* test: refactor test

* test: run test first

* fix(parallel_request_limiter.py): only update hidden params, don't set new (can lead to errors for responses where attribute can't be set)
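A sketch of the "update, don't set" idea from the last bullet; the _hidden_params attribute name and helper are illustrative assumptions.

```python
# Illustrative: mutate the existing hidden-params dict instead of assigning a
# new attribute, since some response objects reject arbitrary attribute sets.
def merge_hidden_params(response, new_params: dict) -> None:
    hidden = getattr(response, "_hidden_params", None)
    if isinstance(hidden, dict):
        hidden.update(new_params)
    # If the attribute is missing, skip rather than setattr(), which can raise
    # on objects that don't allow new attributes.
```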

* (perf improvement proxy) use one redis set cache to update spend in db (30-40% perf improvement)  (#5960)

* use one set op to update spend in db

* fix test_team_cache_update_called

* fix redis async_set_cache_pipeline when empty list passed to it (#5962)
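Both Redis changes above share one shape, sketched here: batch the spend increments into a single pipeline round trip, and make an empty batch a no-op. Names are illustrative, not the proxy's actual functions.

```python
import redis.asyncio as redis

async def flush_spend_updates(client: redis.Redis, updates: dict[str, float]) -> None:
    # Guard: an empty batch should be a no-op, not an error (the #5962 fix).
    if not updates:
        return
    # One pipeline round trip instead of one call per key (the #5960 speedup).
    pipe = client.pipeline()
    for key, spend in updates.items():
        pipe.incrbyfloat(key, spend)
    await pipe.execute()
```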

* [Feat Proxy] Allow using hypercorn for http v2  (#5950)

* use run_hypercorn

* add docs on using hypercorn

* docs clean up langfuse.md
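For flavor, a minimal self-contained Hypercorn example (Hypercorn is an ASGI server with HTTP/2 support; the tiny stand-in app and the bind address are assumptions, though 4000 is litellm's usual proxy port):

```python
import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

async def app(scope, receive, send):
    # Tiny ASGI app standing in for the proxy application.
    if scope["type"] != "http":
        return
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"ok"})

if __name__ == "__main__":
    config = Config()
    config.bind = ["0.0.0.0:4000"]
    asyncio.run(serve(app, config))
```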

* (feat proxy prometheus) track virtual key, key alias, error code, error code class on prometheus  (#5968)

* track api key and team in prom latency metric

* add test for latency metric

* test prometheus success metrics for latency

* track team and key labels for deployment failures

* add test for litellm_deployment_failure_responses_total

* fix checks for premium user on prometheus

* log_success_fallback_event and log_failure_fallback_event

* log original_exception in log_success_fallback_event

* track key, team and exception status and class on fallback metrics

* use get_standard_logging_metadata

* fix import error

* track litellm_deployment_successful_fallbacks

* add test test_proxy_fallback_metrics

* add log log_success_fallback_event

* fix test prometheus
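The labeled-counter pattern behind these metrics, sketched with prometheus_client; the label set mirrors the commit text, but the exact metric and label names are assumptions.

```python
from prometheus_client import Counter

# prometheus_client appends "_total" on exposition, so this surfaces as
# litellm_deployment_failure_responses_total, as named in the bullets above.
deployment_failures = Counter(
    "litellm_deployment_failure_responses",
    "Deployment failures by virtual key, team, and error class (illustrative)",
    ["hashed_api_key", "api_key_alias", "team", "exception_status", "exception_class"],
)

deployment_failures.labels(
    hashed_api_key="abc123",
    api_key_alias="my-key",
    team="prod",
    exception_status="429",
    exception_class="RateLimitError",
).inc()
```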

* (proxy prometheus) track api key and team in latency metrics (#5966)

* track api key and team in prom latency metric

* add test for latency metric

* test prometheus success metrics for latency

* (feat prometheus proxy) track remaining team and key alias in deployment failure metrics (#5967)

* track api key and team in prom latency metric

* add test for latency metric

* test prometheus success metrics for latency

* track team and key labels for deployment failures

* add test for litellm_deployment_failure_responses_total

* bump: version 1.48.5 β†’ 1.48.6

* fix sso sign in tests

* ci/cd run again

* add sentry sdk to litellm docker (#5965)

* ci/cd run again

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>