redis otel tracing + async support for latency routing (#6452)

* docs(exception_mapping.md): add missing exception types

Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183

* fix(main.py): register custom model pricing with specific key

Ensure custom model pricing is registered to the specific model+provider key combination
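A minimal sketch of the pattern (the deployment name and prices below are hypothetical; litellm.register_model is litellm's public hook for registering custom pricing):

    import litellm

    # Key the pricing entry by "provider/model" rather than the bare model
    # name, so a custom price for one provider's deployment does not leak
    # into other providers serving the same underlying model.
    litellm.register_model(
        {
            "azure/my-gpt-4o-deployment": {  # hypothetical deployment name
                "input_cost_per_token": 2.5e-06,
                "output_cost_per_token": 1e-05,
                "litellm_provider": "azure",
                "mode": "chat",
            }
        }
    )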

* test: make testing more robust for custom pricing

* fix(redis_cache.py): instrument otel logging for sync redis calls

Ensures complete otel coverage of all redis cache calls
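Roughly the following sketch, assuming the standard opentelemetry-api; traced_get is illustrative, not the actual redis_cache.py code:

    import redis
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)
    client = redis.Redis(host="localhost", port=6379)

    def traced_get(key: str, parent_otel_span=None):
        # Attach this sync cache read to the request's existing trace when a
        # parent span is available, instead of emitting an orphaned span.
        ctx = (
            trace.set_span_in_context(parent_otel_span)
            if parent_otel_span is not None
            else None
        )
        with tracer.start_as_current_span("redis.get", context=ctx) as span:
            span.set_attribute("db.system", "redis")
            span.set_attribute("db.redis.key", key)
            return client.get(key)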

* refactor: pass parent_otel_span for redis caching calls in router

Allows more visibility into which calls are causing latency issues
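Simplified sketch of the threading pattern (the cache key and function body are illustrative; parent_otel_span matches the parameter added in this change):

    from typing import Optional
    from opentelemetry.trace import Span

    # Each router cache helper now accepts the request's parent span and
    # forwards it to the cache layer, so slow Redis round-trips land inside
    # the request's trace rather than as disconnected spans.
    async def _async_get_cooldown_deployments(
        litellm_router_instance, parent_otel_span: Optional[Span]
    ) -> list:
        cooldown_models = await litellm_router_instance.cache.async_get_cache(
            key="deployment_cooldowns",  # illustrative key
            parent_otel_span=parent_otel_span,
        )
        return cooldown_models or []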

* test: update tests with new params

* refactor: ensure e2e otel tracing for router

* refactor(router.py): add more otel tracing across the router

Surfaces latency issues anywhere in the router request path
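Illustrative pattern only, not router.py verbatim: each routing stage gets its own child span, so a request's trace shows where latency accrues instead of one opaque end-to-end duration.

    from opentelemetry import trace

    tracer = trace.get_tracer("litellm.router")

    # Hypothetical stage names; the point is one child span per stage.
    async def handle_request(pick_deployment, call_provider):
        with tracer.start_as_current_span("router.request") as root_span:
            with tracer.start_as_current_span("router.pick_deployment"):
                deployment = await pick_deployment()
            root_span.set_attribute("router.deployment", str(deployment))
            with tracer.start_as_current_span("router.provider_call"):
                return await call_provider(deployment)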

* fix: fix linting error

* fix(router.py): fix linting error

* fix: fix test

* test: fix tests

* fix(dual_cache.py): pass ttl to redis cache
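Simplified sketch of the fix (this DualCache is a stand-in, not the real litellm class): both layers must receive the same ttl, otherwise Redis keys outlive their in-memory counterparts and keep serving stale values.

    from typing import Optional

    class DualCache:
        def __init__(self, in_memory_cache, redis_cache):
            self.in_memory_cache = in_memory_cache
            self.redis_cache = redis_cache

        async def async_set_cache(self, key, value, ttl: Optional[float] = None):
            await self.in_memory_cache.async_set_cache(key, value, ttl=ttl)
            # The fix: forward ttl instead of dropping it on the Redis write.
            await self.redis_cache.async_set_cache(key, value, ttl=ttl)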

* fix: fix param
Krish Dholakia 2024-10-28 21:52:12 -07:00 committed by GitHub
parent d9e7818e6b
commit 4f8a3fd4cf
25 changed files with 559 additions and 147 deletions


@@ -242,12 +242,12 @@ async def test_single_deployment_no_cooldowns_test_prod_mock_completion_calls():
             pass

     cooldown_list = await _async_get_cooldown_deployments(
-        litellm_router_instance=router
+        litellm_router_instance=router, parent_otel_span=None
     )
     assert len(cooldown_list) == 0

     healthy_deployments, _ = await router._async_get_healthy_deployments(
-        model="gpt-3.5-turbo"
+        model="gpt-3.5-turbo", parent_otel_span=None
     )

     print("healthy_deployments: ", healthy_deployments)
@@ -351,7 +351,7 @@ async def test_high_traffic_cooldowns_all_healthy_deployments():
     print("model_stats: ", model_stats)

     cooldown_list = await _async_get_cooldown_deployments(
-        litellm_router_instance=router
+        litellm_router_instance=router, parent_otel_span=None
     )
     assert len(cooldown_list) == 0
@@ -449,7 +449,7 @@ async def test_high_traffic_cooldowns_one_bad_deployment():
     print("model_stats: ", model_stats)

     cooldown_list = await _async_get_cooldown_deployments(
-        litellm_router_instance=router
+        litellm_router_instance=router, parent_otel_span=None
     )
     assert len(cooldown_list) == 1
@@ -550,7 +550,7 @@ async def test_high_traffic_cooldowns_one_rate_limited_deployment():
     print("model_stats: ", model_stats)

     cooldown_list = await _async_get_cooldown_deployments(
-        litellm_router_instance=router
+        litellm_router_instance=router, parent_otel_span=None
     )
     assert len(cooldown_list) == 1