Commit graph

598 commits

Krrish Dholakia
f62801f795 fix(router.py): return model alias w/ underlying deployment on router.get_model_list()
Fixes https://github.com/BerriAI/litellm/issues/5524#issuecomment-2336410666
2024-09-07 18:01:31 -07:00
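A minimal sketch of the fixed behavior, assuming a `model_group_alias` mapping (model names and the API key are placeholders):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},  # placeholder key
        }
    ],
    model_group_alias={"my-alias": "gpt-3.5-turbo"},  # hypothetical alias -> underlying model group
)

# Post-fix, the alias is returned alongside its underlying deployment.
for deployment in router.get_model_list():
    print(deployment["model_name"])
```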
Krish Dholakia
2cab33b061 LiteLLM Minor Fixes and Improvements (08/06/2024) (#5567)
* fix(utils.py): return citations for perplexity streaming

Fixes https://github.com/BerriAI/litellm/issues/5535

* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)

* fix(anthropic/chat.py): support fallbacks for anthropic streaming

Fixes https://github.com/BerriAI/litellm/issues/5512

* fix(anthropic/chat.py): use module level http client if none given (prevents early client closure)

* fix: fix linting errors

* fix(http_handler.py): fix raise_for_status error handling

* test: retry flaky test

* fix otel type

* fix(bedrock/embed): fix error raising

* test(test_openai_batches_and_files.py): skip azure batches test (for now) - quota exceeded

* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* All `model_group_alias` should show up in `/models`, `/model/info`, `/model_group/info` (#5539)

* fix(router.py): support returning model_alias model names in `/v1/models`

* fix(proxy_server.py): support returning model aliases on `/model/info`

* feat(router.py): support returning model group alias for `/model_group/info`

* fix(proxy_server.py): fix linting errors

* fix(proxy_server.py): fix linting errors

* build(model_prices_and_context_window.json): add amazon titan text premier pricing information

Closes https://github.com/BerriAI/litellm/issues/5560

* feat(litellm_logging.py): log standard logging response object for pass through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3

* fix(success_handler.py): fix linting error

* fix(success_handler.py): fix linting errors

* fix(team_endpoints.py): Allows admin to update team member budgets

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-06 17:16:24 -07:00
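A minimal sketch of the streaming-fallback path fixed above (model names and keys are placeholders): if the anthropic deployment fails, the router should now fall back to the second deployment even when `stream=True`.

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "claude-3",
            "litellm_params": {"model": "anthropic/claude-3-haiku-20240307", "api_key": "sk-ant-..."},
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
        },
    ],
    fallbacks=[{"claude-3": ["gpt-3.5-turbo"]}],
)

response = router.completion(
    model="claude-3",
    messages=[{"role": "user", "content": "hi"}],
    stream=True,  # fallbacks now also cover the anthropic streaming path
)
for chunk in response:
    print(chunk)
```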
Ishaan Jaff
7370a994f5 use correct type hints for audio transcriptions 2024-09-05 09:12:27 -07:00
Krish Dholakia
6fdee99632 LiteLLM Minor fixes + improvements (08/04/2024) (#5505)
* Minor IAM AWS OIDC Improvements (#5246)

* AWS IAM: Temporary tokens are valid across all regions after being issued, so it is wasteful to request one for each region.

* AWS IAM: Include an inline policy, to help reduce misuse of overly permissive IAM roles.

* (test_bedrock_completion.py): Ensure we are testing cross AWS region OIDC flow.

* fix(router.py): log rejected requests

Fixes https://github.com/BerriAI/litellm/issues/5498

* refactor: don't use verbose_logger.exception, if exception is raised

The user might already have handling for this, but alerting systems in prod will surface it as an unhandled error.

* fix(datadog.py): support setting datadog source as an env var

Fixes https://github.com/BerriAI/litellm/issues/5508

* docs(logging.md): add dd_source to datadog docs

* fix(proxy_server.py): expose `/customer/list` endpoint for showing all customers

* (bedrock): Fix usage with Cloudflare AI Gateway, and proxies in general. (#5509)

* feat(anthropic.py): support 'cache_control' param for content when it is a string

* Revert "(bedrock): Fix usage with Cloudflare AI Gateway, and proxies in gener…" (#5519)

This reverts commit 3fac0349c2.

* refactor: ci/cd run again

---------

Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
2024-09-04 22:16:55 -07:00
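A quick way to exercise the newly exposed `/customer/list` endpoint (assumes a proxy running locally; the address and master key are placeholders):

```python
import requests

resp = requests.get(
    "http://0.0.0.0:4000/customer/list",  # assumed local proxy address
    headers={"Authorization": "Bearer sk-1234"},  # placeholder master key
)
print(resp.json())
```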
Krish Dholakia
18da7adce9 feat(router.py): Support Loadbalancing batch azure api endpoints (#5469)
* feat(router.py): initial commit for loadbalancing azure batch api endpoints

Closes https://github.com/BerriAI/litellm/issues/5396

* fix(router.py): working `router.acreate_file()`

* feat(router.py): working router.acreate_batch endpoint

* feat(router.py): expose router.aretrieve_batch function

Makes it easy for the user to retrieve batch information

* feat(router.py): support 'router.alist_batches' endpoint

Adds support for getting all batches across all endpoints

* feat(router.py): working loadbalancing on `/v1/files`

* feat(proxy_server.py): working loadbalancing on `/v1/batches`

* feat(proxy_server.py): working loadbalancing on Retrieve + List batch
2024-09-02 21:32:55 -07:00
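A sketch of the loadbalanced batch flow using the endpoints named above; the deployment config is a placeholder and the exact kwargs are assumptions following the OpenAI batch API shape.

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "azure-batch",  # hypothetical model group
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_key": "...",
                "api_base": "https://my-endpoint.openai.azure.com",  # placeholder
            },
        }
    ]
)

async def run_batch():
    # the router picks a deployment and uploads the file there
    file_obj = await router.acreate_file(
        model="azure-batch", file=open("batch_input.jsonl", "rb"), purpose="batch"
    )
    batch = await router.acreate_batch(
        model="azure-batch",
        input_file_id=file_obj.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    # retrieval is routed back to the deployment that owns the batch
    print(await router.aretrieve_batch(model="azure-batch", batch_id=batch.id))

asyncio.run(run_batch())
```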
Krish Dholakia
ca4e746545 LiteLLM minor fixes + improvements (31/08/2024) (#5464)
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints

* test(test_streaming.py): skip model due to end of life

* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits

Closes https://github.com/BerriAI/litellm/issues/4096
2024-09-01 13:31:42 -07:00
Krish Dholakia
321b0961b5 fix: Minor LiteLLM Fixes + Improvements (29/08/2024) (#5436)
* fix(model_checks.py): support returning wildcard models on `/v1/models`

Fixes https://github.com/BerriAI/litellm/issues/4903

* fix(bedrock_httpx.py): support calling bedrock via api_base

Closes https://github.com/BerriAI/litellm/pull/4587

* fix(litellm_logging.py): only leave last 4 char of gemini key unmasked

Fixes https://github.com/BerriAI/litellm/issues/5433

* feat(router.py): support setting 'weight' param for models on router

Closes https://github.com/BerriAI/litellm/issues/5410

* test(test_bedrock_completion.py): add unit test for custom api base

* fix(model_checks.py): handle no "/" in model
2024-08-29 22:40:25 -07:00
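A sketch of the new `weight` param (values illustrative): with the default simple-shuffle strategy, traffic is split in proportion to each deployment's weight.

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-key-1", "weight": 1},
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-key-2", "weight": 3},
        },
    ]
)
# The second deployment should now receive roughly 3x the traffic of the first.
```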
Krrish Dholakia
6e4f0a95da fix(router.py): fix cooldown check 2024-08-28 16:38:42 -07:00
Ishaan Jaff
df4f1458e6 feat - add rerank on proxy 2024-08-27 17:36:40 -07:00
Krrish Dholakia
c558648180 fix(router.py): fix aembedding type hints
Fixes https://github.com/BerriAI/litellm/issues/5383
2024-08-27 14:29:18 -07:00
Krrish Dholakia
c795e9feeb fix(router.py): enable dynamic retry after in exception string
Updates cooldown logic to cool down individual models

Closes https://github.com/BerriAI/litellm/issues/1339
2024-08-24 16:59:30 -07:00
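For context, a sketch of the related cooldown knobs on the router (values illustrative, key a placeholder); with this change, a cooldown applies to the individual failing deployment rather than the whole model group.

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."}},
    ],
    allowed_fails=3,   # failures tolerated per deployment before cooldown
    cooldown_time=30,  # seconds the failing deployment sits out of rotation
)
```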
Krrish Dholakia
27a5cd12e0 fix(utils.py): correctly re-raise the headers from an exception, if present
Fixes issue where retry-after on the router was not using the Azure/OpenAI numbers
2024-08-24 12:30:30 -07:00
Krrish Dholakia
6415f92bbb fix(router.py): don't cooldown on apiconnectionerrors
Fixes issue where a model would be put in cooldown due to API connection errors
2024-08-24 09:53:05 -07:00
Krrish Dholakia
45048ee006 fix(router.py): fix linting error 2024-08-21 15:35:10 -07:00
Ishaan Jaff
528bb3f7ac test test_using_default_working_fallback 2024-08-20 13:32:55 -07:00
Ishaan Jaff
165e0e3ad1 fix run sync fallbacks 2024-08-20 12:55:36 -07:00
Ishaan Jaff
078fe97053 fix: fallbacks don't recurse on the same fallback 2024-08-20 12:50:20 -07:00
Ishaan Jaff
fb16ff2335 fix: don't retry errors when no healthy deployments are available 2024-08-20 12:17:05 -07:00
Ishaan Jaff
5e2f962ba3 test + never retry on 404 errors 2024-08-20 11:59:43 -07:00
Ishaan Jaff
7171efc729 use model access groups for teams 2024-08-17 16:45:53 -07:00
Krrish Dholakia
2874b94fb1 refactor: replace .error() with .exception() logging for better debugging on sentry 2024-08-16 09:22:47 -07:00
Ishaan Jaff
25af3ffe5b v0 track fallback events 2024-08-10 13:31:00 -07:00
Krrish Dholakia
482acc7ee1 fix(router.py): fallback on 400-status code requests 2024-08-09 12:16:49 -07:00
Krrish Dholakia
07e5847e65 feat(router.py): allow using .acompletion() for request prioritization
allows the /chat/completions endpoint to work for request prioritization calls
2024-08-07 16:43:12 -07:00
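A sketch of request prioritization through the regular `.acompletion()` path (lower `priority` means processed sooner; model name and key are placeholders):

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."}},
    ]
)

async def main():
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        priority=0,  # 0 = highest priority in the request queue
    )
    print(response)

asyncio.run(main())
```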
Ishaan Jaff
a0b2c107c4 fix getting provider_specific_deployment 2024-08-07 15:20:59 -07:00
Ishaan Jaff
31e4fca748 fix use provider specific routing 2024-08-07 14:37:20 -07:00
Ishaan Jaff
bb9493e5f7 router use provider specific wildcard routing 2024-08-07 14:12:10 -07:00
Ishaan Jaff
6a1a4eb822 add + test provider specific routing 2024-08-07 13:49:46 -07:00
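A sketch of provider-specific wildcard routing (the key is a placeholder): any `anthropic/...` request is matched to the wildcard deployment.

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "anthropic/*",
            "litellm_params": {"model": "anthropic/*", "api_key": "sk-ant-..."},  # placeholder key
        }
    ]
)

response = router.completion(
    model="anthropic/claude-3-haiku-20240307",  # matched by the "anthropic/*" wildcard
    messages=[{"role": "user", "content": "hi"}],
)
```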
Krrish Dholakia
0de640700d fix(router.py): add reason for fallback failure to client-side exception string
Makes it easier to debug why a fallback failed to occur
2024-08-07 13:02:47 -07:00
Ishaan Jaff
0dd8f50477 use router_cooldown_handler 2024-08-07 10:40:55 -07:00
Krrish Dholakia
fdb47e5479 fix: fix test to specify allowed_fails 2024-08-05 21:39:59 -07:00
Krrish Dholakia
934883999a fix(router.py): move deployment cooldown list message to error log, not client-side
don't show the user all deployments
2024-08-03 12:49:39 -07:00
Krrish Dholakia
b0d2727bbf feat(router.py): add flag for mock testing loadbalancing for rate limit errors 2024-08-03 12:34:11 -07:00
Krrish Dholakia
dc58b9f33e fix(utils.py): fix linting errors 2024-07-30 18:38:10 -07:00
Krrish Dholakia
96ad9c877c fix(router.py): gracefully handle scenario where completion response doesn't have total tokens
Closes https://github.com/BerriAI/litellm/issues/4968
2024-07-30 15:14:03 -07:00
Krrish Dholakia
3a1eedfbf3 feat(ollama_chat.py): support ollama tool calling
Closes https://github.com/BerriAI/litellm/issues/4812
2024-07-26 21:51:54 -07:00
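A sketch of ollama tool calling (assumes a local ollama server running a tool-capable model; the tool schema follows the OpenAI function format, and the tool itself is hypothetical):

```python
import litellm

response = litellm.completion(
    model="ollama_chat/llama3",  # assumes this model is pulled locally
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)
print(response.choices[0].message.tool_calls)
```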
Krrish Dholakia
e39ff46222 docs(config.md): update wildcard docs 2024-07-26 08:59:53 -07:00
Ishaan Jaff
a46c463dee router support setting pass_through_all_models 2024-07-25 18:34:12 -07:00
Krrish Dholakia
1d33759bb1 fix(router.py): add support for diskcache to router 2024-07-25 14:30:46 -07:00
Ishaan Jaff
7888074012 fix - test router debug logs 2024-07-20 18:45:31 -07:00
Ishaan Jaff
fcee8bc61f router - use verbose logger when using litellm.Router 2024-07-20 17:36:25 -07:00
Ishaan Jaff
d1a4246d2b control using enable_tag_filtering 2024-07-18 19:39:04 -07:00
Ishaan Jaff
cd40d58544 router - refactor to tag based routing 2024-07-18 19:22:09 -07:00
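A sketch of tag-based routing (tags, model names, and keys are illustrative): with `enable_tag_filtering`, a request carrying a tag is only routed to deployments with a matching tag.

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "openai/gpt-4", "api_key": "sk-...", "tags": ["paid"]},
        },
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "openai/gpt-4o-mini", "api_key": "sk-...", "tags": ["free"]},
        },
    ],
    enable_tag_filtering=True,
)

response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "hi"}],
    metadata={"tags": ["free"]},  # routed only to the "free"-tagged deployment
)
```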
Ishaan Jaff
778cb8799e Merge pull request #4786 from BerriAI/litellm_use_model_tier_keys
[Feat-Enterprise] Use free/paid tiers for Virtual Keys
2024-07-18 18:07:09 -07:00
Krrish Dholakia
5d0bb0c6ee fix(utils.py): fix status code in exception mapping 2024-07-18 18:04:59 -07:00
Ishaan Jaff
d4cad75d34 router - use free paid tier routing 2024-07-18 17:09:42 -07:00
Krrish Dholakia
432b7ae264 fix(router.py): check for request_timeout in acompletion
support 'request_timeout' param in router acompletion
2024-07-17 17:19:06 -07:00
Ishaan Jaff
dc5c72d04e router return get_deployment_by_model_group_name 2024-07-15 19:27:12 -07:00
Krish Dholakia
f4d140efec Merge pull request #4635 from BerriAI/litellm_anthropic_adapter
Anthropic `/v1/messages` endpoint support
2024-07-10 22:41:53 -07:00
Krrish Dholakia
48be4ce805 feat(proxy_server.py): working /v1/messages with config.yaml
Adds async router support for adapter_completion call
2024-07-10 18:53:54 -07:00