Commit graph

163 commits

Krrish Dholakia
3560f0ef2c refactor: move all testing to top-level of repo
Closes https://github.com/BerriAI/litellm/issues/486
2024-09-28 21:08:14 -07:00
Krish Dholakia
8039b95aaf
LiteLLM Minor Fixes & Improvements (09/21/2024) (#5819)
* fix(router.py): fix error message

* Litellm disable keys (#5814)

* build(schema.prisma): allow blocking/unblocking keys

Fixes https://github.com/BerriAI/litellm/issues/5328

* fix(key_management_endpoints.py): fix pop

* feat(auth_checks.py): allow admin to enable/disable virtual keys

Closes https://github.com/BerriAI/litellm/issues/5328

* docs(vertex.md): add auth section for vertex ai

Addresses - https://github.com/BerriAI/litellm/issues/5768#issuecomment-2365284223

* build(model_prices_and_context_window.json): show which models support prompt_caching

Closes https://github.com/BerriAI/litellm/issues/5776

* fix(router.py): allow setting default priority for requests

* fix(router.py): add 'retry-after' header for concurrent request limit errors

Fixes https://github.com/BerriAI/litellm/issues/5783

* fix(router.py): correctly raise and use retry-after header from azure+openai

Fixes https://github.com/BerriAI/litellm/issues/5783

* fix(user_api_key_auth.py): fix valid token being none

* fix(auth_checks.py): fix model dump for cache management object

* fix(user_api_key_auth.py): pass prisma_client to obj

* test(test_otel.py): update test for new key check

* test: fix test
2024-09-21 18:51:53 -07:00
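
Among the changes above, #5814 adds blocking/unblocking of virtual keys. A minimal sketch of driving that from a script, assuming the proxy exposes `/key/block` and `/key/unblock` endpoints for admins; the payload shape and addresses are assumptions, not confirmed signatures:

```python
import httpx

PROXY_BASE = "http://localhost:4000"  # assumed local proxy address
MASTER_KEY = "sk-1234"                # placeholder admin key

def set_key_blocked(token: str, blocked: bool) -> dict:
    """Block or unblock a virtual key via the proxy admin API (sketch)."""
    endpoint = "/key/block" if blocked else "/key/unblock"
    resp = httpx.post(
        f"{PROXY_BASE}{endpoint}",
        headers={"Authorization": f"Bearer {MASTER_KEY}"},
        json={"key": token},  # payload shape is an assumption
    )
    resp.raise_for_status()
    return resp.json()
```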
Krish Dholakia
6051086322
test: replace gpt-3.5-turbo-0613 (deprecated model) (#5794) 2024-09-19 15:39:37 -07:00
Ishaan Jaff
c8d15544c8
[Fix] Router cooldown logic - use % thresholds instead of allowed fails to cooldown deployments (#5698)
* move cooldown logic to its own helper

* add new track deployment metrics folder

* increment success, fails for deployment in current minute

* fix cooldown logic

* fix test_aaarouter_dynamic_cooldown_message_retry_time

* fix test_single_deployment_no_cooldowns_test_prod_mock_completion_calls

* clean up get from deployment test

* fix _async_get_healthy_deployments

* add mock InternalServerError

* test deployment failing 25% requests

* add test_high_traffic_cooldowns_one_bad_deployment

* fix vertex load test

* add test for rate limit error models in cool down

* change default cooldown time

* fix cooldown message time

* fix cooldown on 429 error

* fix doc string for _should_cooldown_deployment

* fix sync cooldown logic router
2024-09-14 18:01:19 -07:00
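
The PR above replaces fixed allowed-fails counts with percentage thresholds. A minimal sketch of the idea behind a `_should_cooldown_deployment`-style check, assuming per-minute success/failure counters; the function name, the minimum-sample guard, and the 25% default are illustrative (loosely based on the "failing 25% requests" test above), not litellm's actual code:

```python
def should_cooldown_deployment(
    fails_in_current_minute: int,
    successes_in_current_minute: int,
    failure_threshold: float = 0.25,  # assumed default: cool down above 25% failures
    min_requests: int = 4,            # ignore tiny samples to avoid noisy cooldowns
) -> bool:
    """Illustrative %-based cooldown check (not litellm's exact implementation)."""
    total = fails_in_current_minute + successes_in_current_minute
    if total < min_requests:
        return False
    return (fails_in_current_minute / total) > failure_threshold
```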
Krish Dholakia
60709a0753
LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689)
* refactor: cleanup unused variables + fix pyright errors

* feat(health_check.py): Closes https://github.com/BerriAI/litellm/issues/5686

* fix(o1_reasoning.py): add stricter check for o-1 reasoning model

* refactor(mistral/): make it easier to see mistral transformation logic

* fix(openai.py): fix openai o-1 model param mapping

Fixes https://github.com/BerriAI/litellm/issues/5685

* feat(main.py): infer finetuned gemini model from base model

Fixes https://github.com/BerriAI/litellm/issues/5678

* docs(vertex.md): update docs to call finetuned gemini models

* feat(proxy_server.py): allow admin to hide proxy model aliases

Closes https://github.com/BerriAI/litellm/issues/5692

* docs(load_balancing.md): add docs on hiding alias models from proxy config

* fix(base.py): don't raise notimplemented error

* fix(user_api_key_auth.py): fix model max budget check

* fix(router.py): fix elif

* fix(user_api_key_auth.py): don't set team_id to empty str

* fix(team_endpoints.py): fix response type

* test(test_completion.py): handle predibase error

* test(test_proxy_server.py): fix test

* fix(o1_transformation.py): fix max_completion_token mapping

* test(test_image_generation.py): mark flaky test
2024-09-14 10:02:55 -07:00
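
One fix above corrects the `max_completion_token` mapping for o1 models, which accept `max_completion_tokens` rather than `max_tokens`. A minimal sketch of that kind of param translation, assuming an OpenAI-style params dict; this mirrors the described behavior, not litellm's exact transformation code:

```python
def map_o1_params(params: dict) -> dict:
    """Translate chat params for o1-style models (illustrative sketch)."""
    mapped = dict(params)
    if "max_tokens" in mapped:
        # o1 models reject max_tokens; move the value to max_completion_tokens
        mapped["max_completion_tokens"] = mapped.pop("max_tokens")
    return mapped
```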
Krish Dholakia
98c34a7e27
LiteLLM Minor Fixes and Improvements (11/09/2024) (#5634)
* fix(caching.py): set ttl for async_increment cache

fixes issue where ttl for redis client was not being set on increment_cache

Fixes https://github.com/BerriAI/litellm/issues/5609

* fix(caching.py): fix increment cache w/ ttl for sync increment cache on redis

Fixes https://github.com/BerriAI/litellm/issues/5609

* fix(router.py): support adding retry policy + allowed fails policy via config.yaml

* fix(router.py): don't cooldown single deployments

No point, as there's no other deployment to loadbalance with.

* fix(user_api_key_auth.py): support setting allowed email domains on jwt tokens

Closes https://github.com/BerriAI/litellm/issues/5605

* docs(token_auth.md): add user upsert + allowed email domain to jwt auth docs

* fix(litellm_pre_call_utils.py): fix dynamic key logging when team id is set

Fixes issue where key logging would not be set if team metadata was not none

* fix(secret_managers/main.py): load environment variables correctly

Fixes issue where os.environ/ was not being loaded correctly

* test(test_router.py): fix test

* feat(spend_tracking_utils.py): support logging additional usage params - e.g. prompt caching values for deepseek

* test: fix tests

* test: fix test

* test: fix test

* test: fix test

* test: fix test
2024-09-11 22:36:06 -07:00
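
The caching fixes above attach a TTL when incrementing Redis counters, so rate-limit windows actually expire. A minimal redis-py sketch of increment-with-TTL using a pipeline; the key name and 60-second window are placeholders:

```python
import redis

r = redis.Redis()  # assumed local redis instance

def increment_with_ttl(key: str, value: int = 1, ttl: int = 60) -> None:
    """Increment a counter and (re)apply a TTL in one pipeline round-trip."""
    pipe = r.pipeline()
    pipe.incrby(key, value)
    pipe.expire(key, ttl)  # without this, the counter key would live forever
    pipe.execute()
```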
Krish Dholakia
72e961af3c
LiteLLM Minor Fixes and Improvements (08/06/2024) (#5567)
* fix(utils.py): return citations for perplexity streaming

Fixes https://github.com/BerriAI/litellm/issues/5535

* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)

* fix(anthropic/chat.py): support fallbacks for anthropic streaming

Fixes https://github.com/BerriAI/litellm/issues/5512

* fix(anthropic/chat.py): use module level http client if none given (prevents early client closure)

* fix: fix linting errors

* fix(http_handler.py): fix raise_for_status error handling

* test: retry flaky test

* fix otel type

* fix(bedrock/embed): fix error raising

* test(test_openai_batches_and_files.py): skip azure batches test (for now) quota exceeded

* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* All `model_group_alias` should show up in `/models`, `/model/info`, `/model_group/info` (#5539)

* fix(router.py): support returning model_alias model names in `/v1/models`

* fix(proxy_server.py): support returning model aliases on `/model/info`

* feat(router.py): support returning model group alias for `/model_group/info`

* fix(proxy_server.py): fix linting errors

* fix(proxy_server.py): fix linting errors

* build(model_prices_and_context_window.json): add amazon titan text premier pricing information

Closes https://github.com/BerriAI/litellm/issues/5560

* feat(litellm_logging.py): log standard logging response object for pass through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3

* fix(success_handler.py): fix linting error

* fix(success_handler.py): fix linting errors

* fix(team_endpoints.py): Allows admin to update team member budgets

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-06 17:16:24 -07:00
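
Two of the features above are fallbacks for anthropic streaming (#5542) and surfacing `model_group_alias` on the model endpoints (#5539). A minimal Router configuration sketch, assuming `fallbacks` and `model_group_alias` are accepted as Router kwargs; model names and keys are placeholders:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "claude-3",
            "litellm_params": {"model": "anthropic/claude-3-haiku-20240307"},
        },
        {
            "model_name": "gpt-fallback",
            "litellm_params": {"model": "gpt-4o-mini"},
        },
    ],
    # fallback applies to streaming calls as well, per #5542
    fallbacks=[{"claude-3": ["gpt-fallback"]}],
    # alias surfaced on /models, /model/info, /model_group/info, per #5539
    model_group_alias={"claude": "claude-3"},
)
```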
Krish Dholakia
9f3fa29624
feat(router.py): Support Loadbalancing batch azure api endpoints (#5469)
* feat(router.py): initial commit for loadbalancing azure batch api endpoints

Closes https://github.com/BerriAI/litellm/issues/5396

* fix(router.py): working `router.acreate_file()`

* feat(router.py): working router.acreate_batch endpoint

* feat(router.py): expose router.aretrieve_batch function

Make it easy for user to retrieve the batch information

* feat(router.py): support 'router.alist_batches' endpoint

Adds support for getting all batches across all endpoints

* feat(router.py): working loadbalancing on `/v1/files`

* feat(proxy_server.py): working loadbalancing on `/v1/batches`

* feat(proxy_server.py): working loadbalancing on Retrieve + List batch
2024-09-02 21:32:55 -07:00
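
A usage sketch for the batch-routing functions named above (`router.acreate_file`, `router.acreate_batch`, `router.aretrieve_batch`); the keyword arguments assume OpenAI-style files/batches shapes and may differ from the real signatures:

```python
from litellm import Router

async def run_batch(router: Router) -> None:
    # kwargs below assume OpenAI files/batches shapes; exact signatures may differ
    f = await router.acreate_file(
        model="azure-gpt",  # model group to loadbalance across
        file=open("requests.jsonl", "rb"),
        purpose="batch",
    )
    batch = await router.acreate_batch(
        model="azure-gpt",
        input_file_id=f.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    print(await router.aretrieve_batch(model="azure-gpt", batch_id=batch.id))
```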
Krish Dholakia
dd7b008161
fix: Minor LiteLLM Fixes + Improvements (29/08/2024) (#5436)
* fix(model_checks.py): support returning wildcard models on `/v1/models`

Fixes https://github.com/BerriAI/litellm/issues/4903

* fix(bedrock_httpx.py): support calling bedrock via api_base

Closes https://github.com/BerriAI/litellm/pull/4587

* fix(litellm_logging.py): only leave last 4 char of gemini key unmasked

Fixes https://github.com/BerriAI/litellm/issues/5433

* feat(router.py): support setting 'weight' param for models on router

Closes https://github.com/BerriAI/litellm/issues/5410

* test(test_bedrock_completion.py): add unit test for custom api base

* fix(model_checks.py): handle no "/" in model
2024-08-29 22:40:25 -07:00
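
For the 'weight' param above, a sketch of a weighted model group, assuming `weight` sits inside `litellm_params` like other deployment settings; names and values are placeholders:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            # weighted ~3x more often than the deployment below
            "litellm_params": {"model": "azure/gpt-4o-eu", "weight": 3},
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "azure/gpt-4o-us", "weight": 1},
        },
    ]
)
```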
Krrish Dholakia
f9034ffcee test: rename test to run earlier 2024-08-28 10:52:12 -07:00
Krrish Dholakia
84137afdd8 test: fix test 2024-08-28 10:50:53 -07:00
Krrish Dholakia
33972cc79c fix(router.py): enable dynamic retry after in exception string
Updates cooldown logic to cooldown individual models

Closes https://github.com/BerriAI/litellm/issues/1339
2024-08-24 16:59:30 -07:00
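
When the retry interval only appears inside an exception string, it has to be parsed out of the message. An illustrative sketch, assuming a "retry after N seconds" phrasing; the exact wording litellm matches may differ:

```python
import re
from typing import Optional

def retry_after_from_message(error_message: str) -> Optional[int]:
    """Best-effort parse of 'retry after N seconds' from an error string."""
    match = re.search(r"retry after (\d+) seconds?", error_message, re.IGNORECASE)
    return int(match.group(1)) if match else None
```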
Krrish Dholakia
76834c6c59 test(test_router.py): add test to ensure retry-after matches received value 2024-08-24 15:21:04 -07:00
Krrish Dholakia
7beb0910c6 test(test_router.py): skip test - create separate pr to match retry after 2024-08-24 15:19:27 -07:00
Krrish Dholakia
de2373d52b fix(openai.py): coverage for correctly re-raising exception headers on openai chat completion + embedding endpoints 2024-08-24 12:55:15 -07:00
Krrish Dholakia
068aafdff9 fix(utils.py): correctly re-raise the headers from an exception, if present
Fixes issue where retry after on router was not using azure / openai numbers
2024-08-24 12:30:30 -07:00
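
A sketch of reading the provider's `retry-after` header off a re-raised exception, assuming openai-python v1 where `APIStatusError` carries the underlying httpx response; this shows how a router could honor azure/openai's own number instead of its internal backoff:

```python
import openai

def get_retry_after(exc: openai.APIStatusError) -> float | None:
    """Read the provider's retry-after header off a raised openai exception."""
    value = exc.response.headers.get("retry-after")
    return float(value) if value is not None else None
```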
Krrish Dholakia
5a2c9d5121 test(test_router.py): add test to ensure error is correctly re-raised 2024-08-24 10:08:14 -07:00
Krrish Dholakia
0b06a76cf9 fix(router.py): don't cooldown on apiconnectionerrors
Fixes issue where model would be in cooldown due to api connection errors
2024-08-24 09:53:05 -07:00
Ishaan Jaff
d42949cb4a test_router_provider_wildcard_routing 2024-08-07 14:12:40 -07:00
Ishaan Jaff
3249e295cb test provider wildcard routing 2024-08-07 13:52:00 -07:00
Krrish Dholakia
cd94c3adc1 fix(types/router.py): remove model_info pydantic field
Fixes https://github.com/BerriAI/litellm/issues/5042
2024-08-05 09:58:44 -07:00
Krrish Dholakia
826bb125e8 test(test_router.py): handle azure api instability 2024-07-25 19:54:40 -07:00
Krish Dholakia
8661da1980
Merge branch 'main' into litellm_fix_httpx_transport 2024-07-06 19:12:06 -07:00
Krrish Dholakia
67433a04a2 test: fix test 2024-07-02 22:13:41 -07:00
Krrish Dholakia
4b17f2dfdb test: skip bad test 2024-07-02 17:46:50 -07:00
Krrish Dholakia
0894439118 test(test_router.py): fix test 2024-07-02 17:45:33 -07:00
Krish Dholakia
d38f01e956
Merge branch 'main' into litellm_fix_httpx_transport 2024-07-02 17:17:43 -07:00
Ishaan Jaff
d25b079caf fix img gen test 2024-06-29 20:54:22 -07:00
Ishaan Jaff
0bda80ddea test- router when using openai prefix 2024-06-29 17:28:08 -07:00
Krrish Dholakia
c9a424d28d fix(router.py): fix get_router_model_info for azure models 2024-06-28 22:13:29 -07:00
Krrish Dholakia
aa6f7665c4 fix(router.py): only return 'max_tokens', 'input_cost_per_token', etc. in 'get_router_model_info' if base_model is set 2024-06-28 10:45:31 -07:00
Krrish Dholakia
98daedaf60 fix(router.py): fix setting httpx mounts 2024-06-26 17:22:04 -07:00
Krrish Dholakia
341c7857c1 test(test_router.py): add testing 2024-06-24 17:28:12 -07:00
Krrish Dholakia
f5fbdf0fee fix(router.py): use user-defined model_input_tokens for pre-call filter checks 2024-06-24 17:25:26 -07:00
Krrish Dholakia
a31a05d45d feat(dynamic_rate_limiter.py): working e2e 2024-06-22 14:41:22 -07:00
Krrish Dholakia
068e8dff5b feat(dynamic_rate_limiter.py): passing base case 2024-06-21 22:46:46 -07:00
Krrish Dholakia
06b297a6e8 fix(router.py): fix set_client init to check if custom_llm_provider is azure not if in model name
Fixes issue where 'azure_ai/' models were being initialized as an AzureOpenAI client
2024-06-21 17:09:20 -07:00
Krrish Dholakia
16889b8478 feat(router.py): allow user to call specific deployment via id
Allows easier health checks for specific deployments by just passing in model id
2024-06-19 13:02:46 -07:00
Krrish Dholakia
14b66c3daa fix(router.py): support multiple orgs in 1 model definition
Closes https://github.com/BerriAI/litellm/issues/3949
2024-06-18 19:36:58 -07:00
Krrish Dholakia
6306914e56 fix(types/router.py): modelgroupinfo to handle mode being None and supported_openai_params not being a list 2024-06-08 20:13:45 -07:00
Krrish Dholakia
a7dcf25722 feat(router.py): enable setting 'order' for a deployment in model list
Allows user to control which model gets called first in model group
2024-06-06 09:46:51 -07:00
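
A short sketch of the 'order' param above, assuming it sits inside `litellm_params` like other deployment settings; the `order: 1` deployment would be attempted first within the model group:

```python
from litellm import Router

router = Router(
    model_list=[
        # tried first
        {"model_name": "gpt-4o", "litellm_params": {"model": "azure/gpt-4o-primary", "order": 1}},
        # tried only after the order=1 deployment
        {"model_name": "gpt-4o", "litellm_params": {"model": "gpt-4o", "order": 2}},
    ]
)
```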
Krrish Dholakia
1d18ca6a7d fix(router.py): security fix - don't show api key in invalid model setup error message 2024-05-29 16:14:57 -07:00
Krrish Dholakia
cc41db018f test(test_router.py): fix testing 2024-05-21 17:31:31 -07:00
Krrish Dholakia
988970f4c2 feat(router.py): Fixes https://github.com/BerriAI/litellm/issues/3769 2024-05-21 17:24:51 -07:00
Krrish Dholakia
1312eece6d fix(router.py): overloads for better router.acompletion typing 2024-05-13 14:27:16 -07:00
Krrish Dholakia
ebc927f1c8 feat(router.py): allow setting model_region in litellm_params
Closes https://github.com/BerriAI/litellm/issues/3580
2024-05-11 10:18:08 -07:00
Krrish Dholakia
1baad80c7d fix(router.py): cooldown deployments, for 401 errors 2024-04-30 17:54:00 -07:00
Krrish Dholakia
87ff26ff27 fix(router.py): unify retry timeout logic across sync + async function_with_retries 2024-04-30 15:23:19 -07:00
Krrish Dholakia
280148543f fix(router.py): fix trailing slash handling for api base which contains /v1 2024-04-27 17:36:28 -07:00
Krish Dholakia
1a06f009d1
Merge branch 'main' into litellm_default_router_retries 2024-04-27 11:21:57 -07:00