Krish Dholakia
9432d1a865
Merge pull request #9357 from BerriAI/litellm_dev_03_18_2025_p2
...
fix(lowest_tpm_rpm_v2.py): support batch writing increments to redis
2025-03-19 15:45:10 -07:00
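The merge above (and the underlying fix further down) describes batching usage increments instead of issuing one Redis write per request; the related base_routing_strategy.py commit below handles keeping the in-memory keys in sync. A minimal sketch of the batching idea, assuming redis-py's asyncio client and hypothetical key names; the actual logic in lowest_tpm_rpm_v2.py differs in detail:

```python
import asyncio
from collections import defaultdict

import redis.asyncio as redis  # pip install redis

# Buffer per-key increments in memory and flush them in one pipeline,
# instead of issuing an INCR round trip on every request.
pending_increments: defaultdict = defaultdict(int)


def record_usage(key: str, amount: int = 1) -> None:
    """Buffer an increment in memory (cheap; no network call)."""
    pending_increments[key] += amount


async def flush_increments(client: redis.Redis) -> None:
    """Write all buffered increments to Redis in a single round trip."""
    if not pending_increments:
        return
    batch = dict(pending_increments)
    pending_increments.clear()
    async with client.pipeline(transaction=False) as pipe:
        for key, amount in batch.items():
            pipe.incrby(key, amount)
        await pipe.execute()


async def main() -> None:
    client = redis.Redis()  # assumes a local Redis for the demo
    record_usage("my-deployment:rpm:2025-03-19-15-45")              # hypothetical key
    record_usage("my-deployment:tpm:2025-03-19-15-45", amount=250)  # hypothetical key
    await flush_increments(client)  # in practice, called from a background task


asyncio.run(main())
```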
Krrish Dholakia
ef008138a3
feat(base_routing_strategy.py): handle updating in-memory keys
2025-03-18 19:44:04 -07:00
Krrish Dholakia
1328afe612
fix(lowest_tpm_rpm_v2.py): support batch writing increments to redis
2025-03-18 19:09:53 -07:00
Krrish Dholakia
a34cc2031d
fix(response_metadata.py): log the litellm_model_name
...
make it easier to track the model sent to the provider
2025-03-18 17:46:33 -07:00
Krrish Dholakia
39ac9e3eca
fix(lowest_tpm_rpm_v2.py): fix updating limits
2025-03-18 17:10:17 -07:00
Krrish Dholakia
cfe94c86cc
fix(lowest_tpm_rpm_routing_v2.py): fix deployment update to use correct keys
2025-03-18 16:28:37 -07:00
Krrish Dholakia
9bf6028f14
fix(lowest_tpm_rpm_v2.py): update key to use model name
2025-03-18 16:19:47 -07:00
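The three key fixes above ("fix updating limits", "use correct keys", "update key to use model name") all revolve around which identifier and time window go into the per-minute usage keys. A hypothetical illustration of building such keys from the model name and the current minute; the real key format in lowest_tpm_rpm_v2.py may differ:

```python
from datetime import datetime, timezone
from typing import Optional, Tuple


def usage_keys(model_name: str, dt: Optional[datetime] = None) -> Tuple[str, str]:
    """Build per-minute TPM/RPM counter keys for a deployment.

    Hypothetical format: "<model>:tpm:<YYYY-MM-DD-HH-MM>" (and ":rpm:").
    Keying on the model name keeps counters stable across restarts and
    shared across router instances.
    """
    dt = dt or datetime.now(timezone.utc)
    minute = dt.strftime("%Y-%m-%d-%H-%M")
    return f"{model_name}:tpm:{minute}", f"{model_name}:rpm:{minute}"


tpm_key, rpm_key = usage_keys("azure/gpt-4o-eu")
print(tpm_key, rpm_key)
```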
Krish Dholakia
69a6da4727
Litellm dev 01 30 2025 p2 ( #8134 )
...
* feat(lowest_tpm_rpm_v2.py): fix redis cache check to use >= instead of >
makes it consistent
* test(test_custom_guardrails.py): add more unit testing on default on guardrails
ensure it runs if the user-sent guardrail list is empty
* docs(quick_start.md): clarify that default-on guardrails run even if the user's guardrails list contains other guardrails
* refactor(litellm_logging.py): refactor no-log to helper util
allows for more consistent behavior
* feat(litellm_logging.py): add event hook to verbose logs
* fix(litellm_logging.py): add unit testing to ensure `litellm.disable_no_log_param` is respected
* docs(logging.md): document how to disable 'no-log' param
* test: fix test to handle feb
* test: cleanup old bedrock model
* fix: fix router check
2025-01-30 22:18:53 -08:00
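The first bullet in the entry above switches the Redis cache check from > to >=, which only matters at the boundary: with >, a deployment sitting exactly at its limit would still be treated as available, while >= keeps the check consistent with the other limit comparisons. A small illustration with hypothetical names:

```python
from typing import Optional


def deployment_at_or_over_limit(current: Optional[int], limit: Optional[int]) -> bool:
    """Return True if the deployment should be skipped for this request.

    Using >= (rather than >) excludes a deployment that has already consumed
    exactly its limit, so the Redis-backed check agrees with the others.
    """
    if current is None or limit is None:
        return False  # no usage data / no limit configured -> don't exclude
    return current >= limit


assert deployment_at_or_over_limit(100, 100) is True   # at the limit: skip
assert deployment_at_or_over_limit(99, 100) is False   # under the limit: usable
```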
Krish Dholakia
d88de268dd
Litellm dev 12 26 2024 p4 ( #7439 )
...
* fix(model_dashboard.tsx): support setting model_info params - e.g. mode on ui
Closes https://github.com/BerriAI/litellm/issues/5270
* fix(lowest_tpm_rpm_v2.py): deployment rpm over limit check
fixes selection error when getting potential deployments below known tpm/rpm limit
Fixes https://github.com/BerriAI/litellm/issues/7395
* fix(test_tpm_rpm_routing_v2.py): add unit test for https://github.com/BerriAI/litellm/issues/7395
* fix(lowest_tpm_rpm_v2.py): fix tpm key name in dict post rpm update
* test: rename test to run earlier
* test: skip flaky test
2024-12-27 12:01:42 -08:00
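The "deployment rpm over limit check" bullet above fixes how candidate deployments are filtered before the lowest-usage one is picked. A simplified sketch of that selection loop; the tpm/rpm fields under litellm_params are illustrative, not the exact structures used in lowest_tpm_rpm_v2.py:

```python
from typing import Dict, List, Optional


def pick_lowest_tpm_deployment(
    deployments: List[dict],
    usage: Dict[str, Dict[str, int]],  # deployment id -> {"tpm": ..., "rpm": ...}
    input_tokens: int,
) -> Optional[dict]:
    """Pick the deployment with the lowest TPM that still has headroom."""
    candidates = []
    for d in deployments:
        d_id = d["model_info"]["id"]
        tpm = usage.get(d_id, {}).get("tpm", 0)
        rpm = usage.get(d_id, {}).get("rpm", 0)
        tpm_limit = d["litellm_params"].get("tpm")
        rpm_limit = d["litellm_params"].get("rpm")
        if tpm_limit is not None and tpm + input_tokens > tpm_limit:
            continue  # this request would push the deployment over its TPM limit
        if rpm_limit is not None and rpm + 1 > rpm_limit:
            continue  # this request would push the deployment over its RPM limit
        candidates.append((tpm, d))
    if not candidates:
        return None  # surfaces upstream as a "No deployments available" error
    return min(candidates, key=lambda pair: pair[0])[1]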
Ishaan Jaff
61b636c20d
[Bug Fix]: Errors in LiteLLM When Using Embeddings Model with Usage-Based Routing ( #7390 )
...
* use slp for usage based routing v2
* update error msg
* fix usage based routing v2
* test_tpm_rpm_updated
* fix unused imports
* fix unused imports
2024-12-23 17:42:24 -08:00
Krish Dholakia
db59e08958
Litellm dev 12 23 2024 p1 ( #7383 )
...
* feat(guardrails_endpoint.py): new `/guardrails/list` endpoint
Allow users to view what the available guardrails are
* docs: document new `/guardrails/list` endpoint
* docs(enterprise.md): update docs
* fix(openai/transcription/handler.py): support cost tracking on vtt + srt formats
* fix(openai/transcriptions/handler.py): default to 'verbose_json' response format if 'text' or 'json' response_format is received; ensures the 'duration' param is present for all audio transcription requests
* fix: fix linting errors
* fix: remove unused import
2024-12-23 16:33:31 -08:00
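The transcription fix above exists because OpenAI's 'text' and 'json' transcription formats omit the duration field, and audio cost tracking is priced per minute of audio. A hedged sketch of the substitution; the helper name is illustrative, not LiteLLM's actual function:

```python
from typing import Optional


def choose_upstream_response_format(requested: Optional[str]) -> str:
    """Map the caller's response_format to the one sent to the provider.

    'text' and 'json' responses carry no duration, so per-minute cost
    tracking would have nothing to bill on; 'verbose_json' always includes
    it. Other formats (e.g. 'vtt', 'srt') are passed through unchanged.
    """
    if requested in (None, "text", "json"):
        return "verbose_json"
    return requested


print(choose_upstream_response_format("json"))  # -> verbose_json
```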
Ishaan Jaff
c7f14e936a
(code quality) run ruff rule to ban unused imports ( #7313 )
...
* remove unused imports
* fix AmazonConverseConfig
* fix test
* fix import
* ruff check fixes
* test fixes
* fix testing
* fix imports
2024-12-19 12:33:42 -08:00
Ishaan Jaff
2fb2801eb4
(Refactor) Code Quality improvement - stop redefining LiteLLMBase ( #7147 )
...
* fix stop redefining LiteLLMBase
* use better name for base pydantic obj
2024-12-10 15:49:01 -08:00
Krish Dholakia
695f48a8f1
fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check ( #6577 )
...
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check
* fix(lowest_tpm_rpm_v2.py): return headers in correct format
* test: update test
* build(deps): bump cookie and express in /docs/my-website (#6566 )
Bumps [cookie](https://github.com/jshttp/cookie ) and [express](https://github.com/expressjs/express ). These dependencies needed to be updated together.
Updates `cookie` from 0.6.0 to 0.7.1
- [Release notes](https://github.com/jshttp/cookie/releases )
- [Commits](https://github.com/jshttp/cookie/compare/v0.6.0...v0.7.1 )
Updates `express` from 4.20.0 to 4.21.1
- [Release notes](https://github.com/expressjs/express/releases )
- [Changelog](https://github.com/expressjs/express/blob/4.21.1/History.md )
- [Commits](https://github.com/expressjs/express/compare/4.20.0...4.21.1 )
---
updated-dependencies:
- dependency-name: cookie
dependency-type: indirect
- dependency-name: express
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* docs(virtual_keys.md): update Dockerfile reference (#6554 )
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
* (proxy fix) - call connect on prisma client when running setup (#6534 )
* critical fix - call connect on prisma client when running setup
* fix test_proxy_server_prisma_setup
* fix test_proxy_server_prisma_setup
* Add 3.5 haiku (#6588 )
* feat: add claude-3-5-haiku-20241022 entries
* feat: add claude-3-5-haiku-20241022 and vertex_ai/claude-3-5-haiku@20241022 models
* add missing entries, remove vision
* remove image token costs
* Litellm perf improvements 3 (#6573 )
* perf: move writing key to cache, to background task
* perf(litellm_pre_call_utils.py): add otel tracing for pre-call utils
adds 200ms on calls with pgdb connected
* fix(litellm_pre_call_utils.py): rename call_type to the actual call used
* perf(proxy_server.py): remove db logic from _get_config_from_file
was causing db calls to occur on every llm request, if team_id was set on key
* fix(auth_checks.py): add check for reducing db calls if user/team id does not exist in db
reduces latency/call by ~100ms
* fix(proxy_server.py): minor fix on existing_settings not including alerting
* fix(exception_mapping_utils.py): map databricks exception string
* fix(auth_checks.py): fix auth check logic
* test: correctly mark flaky test
* fix(utils.py): handle auth token error for tokenizers.from_pretrained
* build: fix map
* build: fix map
* build: fix json for model map
* test: remove eol model
* fix(proxy_server.py): fix db config loading logic
* fix(proxy_server.py): fix order of config / db updates, to ensure fields not overwritten
* test: skip test if required env var is missing
* test: fix test
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
2024-11-05 22:03:44 +05:30
Krish Dholakia
4f8a3fd4cf
redis otel tracing + async support for latency routing ( #6452 )
...
* docs(exception_mapping.md): add missing exception types
Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183
* fix(main.py): register custom model pricing with specific key
Ensure custom model pricing is registered to the specific model+provider key combination
* test: make testing more robust for custom pricing
* fix(redis_cache.py): instrument otel logging for sync redis calls
ensures complete coverage for all redis cache calls
* refactor: pass parent_otel_span for redis caching calls in router
allows for more observability into what calls are causing latency issues
* test: update tests with new params
* refactor: ensure e2e otel tracing for router
* refactor(router.py): add more otel tracing across router
catch all latency issues for router requests
* fix: fix linting error
* fix(router.py): fix linting error
* fix: fix test
* test: fix tests
* fix(dual_cache.py): pass ttl to redis cache
* fix: fix param
2024-10-28 21:52:12 -07:00
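The "register custom model pricing with specific key" bullet above is about storing custom prices under the provider-qualified model key so they apply only to that exact model+provider combination. litellm.register_model() is the public helper for this; the deployment name and prices below are made-up examples:

```python
import litellm

# Register pricing under "azure/<deployment>" (provider-qualified) rather than
# a bare model name, so cost tracking matches this exact combination.
litellm.register_model(
    {
        "azure/my-gpt-4o-deployment": {  # hypothetical deployment name
            "max_tokens": 4096,
            "input_cost_per_token": 5e-06,
            "output_cost_per_token": 1.5e-05,
            "litellm_provider": "azure",
            "mode": "chat",
        }
    }
)

print(litellm.model_cost["azure/my-gpt-4o-deployment"]["input_cost_per_token"])
```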
Ishaan Jaff
610974b4fc
(code quality) add ruff check PLR0915 for too-many-statements ( #6309 )
...
* ruff add PLR0915
* add noqa for PLR0915
* fix noqa
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
2024-10-18 15:36:49 +05:30
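PLR0915 is Ruff's "too-many-statements" rule (ported from pylint, with a default budget of 50 statements per function). The long run of noqa bullets above reflects the usual rollout pattern: enable the rule, then suppress existing violations inline so only new code has to comply. A minimal illustration of the suppression:

```python
# With PLR0915 selected in the Ruff config, any function over the statement
# budget is flagged; legacy functions can opt out one at a time.
def legacy_request_handler():  # noqa: PLR0915 - long function, refactor later
    ...
```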
Ishaan Jaff
4d1b4beb3d
(refactor) caching use LLMCachingHandler for async_get_cache and set_cache ( #6208 )
...
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
2024-10-14 16:34:01 +05:30
Krish Dholakia
d57be47b0f
Litellm ruff linting enforcement ( #5992 )
...
* ci(config.yml): add a 'check_code_quality' step
Addresses https://github.com/BerriAI/litellm/issues/5991
* ci(config.yml): check why circle ci doesn't pick up this test
* ci(config.yml): fix to run 'check_code_quality' tests
* fix(__init__.py): fix unprotected import
* fix(__init__.py): don't remove unused imports
* build(ruff.toml): update ruff.toml to ignore unused imports
* fix: ruff + pyright - fix linting + type-checking errors
* fix: fix linting errors
* fix(lago.py): fix module init error
* fix: fix linting errors
* ci(config.yml): cd into correct dir for checks
* fix(proxy_server.py): fix linting error
* fix(utils.py): fix bare except
causes ruff linting errors
* fix: ruff - fix remaining linting errors
* fix(clickhouse.py): use standard logging object
* fix(__init__.py): fix unprotected import
* fix: ruff - fix linting errors
* fix: fix linting errors
* ci(config.yml): cleanup code qa step (formatting handled in local_testing)
* fix(_health_endpoints.py): fix ruff linting errors
* ci(config.yml): just use ruff in check_code_quality pipeline for now
* build(custom_guardrail.py): include missing file
* style(embedding_handler.py): fix ruff check
2024-10-01 19:44:20 -04:00
Krish Dholakia
8039b95aaf
LiteLLM Minor Fixes & Improvements (09/21/2024) ( #5819 )
...
* fix(router.py): fix error message
* Litellm disable keys (#5814 )
* build(schema.prisma): allow blocking/unblocking keys
Fixes https://github.com/BerriAI/litellm/issues/5328
* fix(key_management_endpoints.py): fix pop
* feat(auth_checks.py): allow admin to enable/disable virtual keys
Closes https://github.com/BerriAI/litellm/issues/5328
* docs(vertex.md): add auth section for vertex ai
Addresses - https://github.com/BerriAI/litellm/issues/5768#issuecomment-2365284223
* build(model_prices_and_context_window.json): show which models support prompt_caching
Closes https://github.com/BerriAI/litellm/issues/5776
* fix(router.py): allow setting default priority for requests
* fix(router.py): add 'retry-after' header for concurrent request limit errors
Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(router.py): correctly raise and use retry-after header from azure+openai
Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(user_api_key_auth.py): fix valid token being none
* fix(auth_checks.py): fix model dump for cache management object
* fix(user_api_key_auth.py): pass prisma_client to obj
* test(test_otel.py): update test for new key check
* test: fix test
2024-09-21 18:51:53 -07:00
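Two of the bullets above surface the provider's Retry-After header on rate-limit errors so the router can cool a deployment down for the right amount of time instead of guessing. A hedged sketch of parsing that header from an httpx response; LiteLLM's own handling differs in detail:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Optional

import httpx


def retry_after_seconds(response: httpx.Response) -> Optional[float]:
    """Return the cooldown suggested by a 429's Retry-After header, if any.

    The header may be an integer number of seconds or an HTTP-date.
    """
    value = response.headers.get("retry-after")
    if value is None:
        return None
    if value.isdigit():
        return float(value)
    try:
        dt = parsedate_to_datetime(value)
        return max(0.0, (dt - datetime.now(timezone.utc)).total_seconds())
    except (TypeError, ValueError):
        return None
```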
Krrish Dholakia
61f4b71ef7
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 09:22:47 -07:00
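The refactor above matters because logger.error() records only the message, while logger.exception(), called from inside an except block, also attaches the traceback at ERROR level, which is what Sentry needs to group and debug failures. A minimal before/after:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("litellm")


def risky() -> None:
    raise ValueError("boom")


try:
    risky()
except ValueError:
    # Before: message only, traceback lost.
    # logger.error("risky() failed")
    # After: same message plus the full traceback.
    logger.exception("risky() failed")
```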
Krrish Dholakia
6cca5612d2
refactor: replace 'traceback.print_exc()' with logging library
...
allows error logs to be in json format for otel logging
2024-06-06 13:47:43 -07:00
sumanth
71e0294485
addressed comments
2024-05-14 10:05:19 +05:30
SUMANTH
978672a56d
Merge branch 'BerriAI:main' into usage-based-routing-ttl-on-cache
2024-05-14 09:08:01 +05:30
Krrish Dholakia
4a3b084961
feat(bedrock_httpx.py): moves to using httpx client for bedrock cohere calls
2024-05-11 13:43:08 -07:00
sumanth
3bc6b5d119
usage-based-routing-ttl-on-cache
2024-05-03 10:50:45 +05:30
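The usage-based-routing-ttl-on-cache work above adds an expiry to the per-minute usage counters so stale keys don't pile up in Redis. A sketch of one common pattern, attaching a TTL on the first increment of each window; the key format and TTL value are hypothetical:

```python
import asyncio

import redis.asyncio as redis

WINDOW_TTL_SECONDS = 120  # a bit longer than the 60s routing window


async def increment_with_ttl(client: redis.Redis, key: str, amount: int = 1) -> int:
    """Increment a per-minute usage counter and make sure it expires."""
    new_value = await client.incrby(key, amount)
    if new_value == amount:
        # First write in this window -> attach the TTL once.
        await client.expire(key, WINDOW_TTL_SECONDS)
    return new_value


async def main() -> None:
    client = redis.Redis()  # assumes a local Redis for the demo
    print(await increment_with_ttl(client, "azure/gpt-4o-eu:rpm:2024-05-03-10-50"))


asyncio.run(main())
```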
Krrish Dholakia
91971fa9e0
feat(router.py): add 'get_model_info' helper function to get the model info for a specific model, based on its id
2024-05-02 17:53:09 -07:00
Krrish Dholakia
020b175ef4
fix(lowest_tpm_rpm_v2.py): skip if item_tpm is None
2024-04-29 21:34:25 -07:00
Krish Dholakia
32534b5e91
Merge pull request #3358 from sumanth13131/usage-based-routing-RPM-fix
...
usage based routing RPM count fix
2024-04-29 16:45:25 -07:00
Krrish Dholakia
a978f2d881
fix(lowest_tpm_rpm_v2.py): shuffle deployments with same tpm values
2024-04-29 15:23:47 -07:00
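The shuffle fix above avoids always sending traffic to the first deployment when several are tied for the lowest TPM. A tiny sketch of that tie-break:

```python
import random

usage_tpm = {"deploy-a": 1200, "deploy-b": 1200, "deploy-c": 4000}  # illustrative numbers

lowest = min(usage_tpm.values())
tied = [d for d, tpm in usage_tpm.items() if tpm == lowest]
print(random.choice(tied))  # spreads load across equally-idle deployments
```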
Krrish Dholakia
f10a066d36
fix(lowest_tpm_rpm_v2.py): add more detail to 'No deployments available' error message
2024-04-29 15:04:37 -07:00
sumanth
89e655c79e
usage based routing RPM count fix
2024-04-30 00:29:38 +05:30
Krrish Dholakia
9379e3d047
fix(lowest_tpm_rpm_v2.py): use a combined tpm+rpm query in async get cache, to reduce redis client calls in high traffic
2024-04-20 16:13:11 -07:00
Krrish Dholakia
01a1a8f731
fix(caching.py): dual cache async_batch_get_cache fix + testing
...
this fixes a bug in usage-based-routing-v2 caused by how the result was being returned from dual cache async_batch_get_cache. It also adds unit testing for that function (and its sync equivalent).
2024-04-19 15:03:25 -07:00
Krrish Dholakia
3b9e2a58e2
fix(lowest_tpm_rpm_v2.py): ensure backwards compatibility for python 3.8
2024-04-18 21:42:35 -07:00
Krrish Dholakia
81573b2dd9
fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2
2024-04-18 21:38:00 -07:00
Krrish Dholakia
a05f148c17
fix(tpm_rpm_routing_v2.py): fix tpm rpm routing
2024-04-18 20:01:22 -07:00
Krrish Dholakia
8179596ebc
fix(lowest_tpm_rpm_v2.py): don't fail calls if redis fails to connect
2024-04-12 19:36:59 -07:00
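The fix above keeps routing functional when Redis is unreachable by treating cache errors as "no data" rather than letting them fail the request. A hedged sketch of that guard; in LiteLLM the equivalent handling lives in the caching layer:

```python
import logging
from typing import Optional

import redis.asyncio as redis
from redis.exceptions import RedisError

logger = logging.getLogger("litellm")


async def safe_get_usage(client: redis.Redis, key: str) -> Optional[int]:
    """Read a usage counter, degrading gracefully if Redis is down.

    Returning None lets the router fall back to in-memory counts (or treat
    the deployment as having no recorded usage) instead of erroring out.
    """
    try:
        value = await client.get(key)
        return int(value) if value is not None else None
    except RedisError as e:
        logger.warning("redis unavailable, skipping usage check: %s", e)
        return None
```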
Krrish Dholakia
ea1574c160
test(test_openai_endpoints.py): add concurrency testing for user defined rate limits on proxy
2024-04-12 18:56:13 -07:00
Krrish Dholakia
c03b0bbb24
fix(router.py): support pre_call_rpm_check for lowest_tpm_rpm_v2 routing
...
have routing strategies expose an 'update rpm' function for checking and updating rpm pre-call
2024-04-12 18:25:14 -07:00
Krrish Dholakia
37ac17aebd
fix(router.py): fix datetime object
2024-04-10 17:55:24 -07:00
Krrish Dholakia
2531701a2a
fix(router.py): make get_cooldown_deployment logic async
2024-04-10 16:57:01 -07:00
Krrish Dholakia
a47a719caa
fix(router.py): generate consistent model id's
...
having the same id for a deployment lets redis usage caching work across multiple instances
2024-04-10 15:23:57 -07:00
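The "consistent model id's" fix above means every router instance derives the same id for the same deployment, so their usage keys in Redis line up. A hedged sketch of one way to get there, hashing the deployment's stable parameters; the exact fields LiteLLM hashes may differ:

```python
import hashlib
import json


def deterministic_deployment_id(litellm_params: dict) -> str:
    """Derive a stable deployment id from its config instead of a random UUID.

    Two proxy instances loading the same config produce the same id, so
    their per-deployment counters in Redis land on the same keys.
    """
    canonical = json.dumps(litellm_params, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]


params = {"model": "azure/gpt-4o-eu", "api_base": "https://example-eu.openai.azure.com"}
print(deterministic_deployment_id(params))  # identical on every instance
```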
Krrish Dholakia
180cf9bd5c
feat(lowest_tpm_rpm_v2.py): move to using redis.incr and redis.mget for getting model usage from redis
...
makes routing work across multiple instances
2024-04-10 14:56:23 -07:00
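This last entry is the core of the strategy: per-deployment counters live in Redis, written with INCR/INCRBY and read back with MGET, so every router instance sees the same usage numbers. A minimal end-to-end sketch with redis-py's asyncio client and hypothetical key names:

```python
import asyncio

import redis.asyncio as redis


async def main() -> None:
    client = redis.Redis()  # assumes a local Redis for the demo
    tpm_key = "azure/gpt-4o-eu:tpm:2024-04-10-14-56"  # hypothetical key format
    rpm_key = "azure/gpt-4o-eu:rpm:2024-04-10-14-56"

    # Write path: every instance increments the shared counters.
    await client.incrby(tpm_key, 250)  # tokens consumed by this request
    await client.incr(rpm_key)         # one request

    # Read path: both counters for a deployment in a single round trip.
    tpm, rpm = await client.mget([tpm_key, rpm_key])
    print(int(tpm or 0), int(rpm or 0))


asyncio.run(main())
```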