Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-25 18:54:30 +00:00)
Litellm dev 12 12 2024 (#7203)
All checks were successful
Read Version from pyproject.toml / read-version (push) Successful in 47s
* fix(azure/): support passing headers to azure openai endpoints
Fixes https://github.com/BerriAI/litellm/issues/6217 (header-passing sketch after this list)
* fix(utils.py): move default tokenizer to just openai
the hf tokenizer makes network calls when fetching the tokenizer - this slows down call execution time (token-counting sketch after this list)
* fix(router.py): fix pattern matching router - add generic "*" to it as well
Fixes an issue where the generic "*" model access group wouldn't show up (Router sketch after the diff below)
* fix(pattern_match_deployments.py): match to the more specific pattern
allows setting a generic wildcard model access group while excluding specific models more easily
* fix(proxy_server.py): fix _delete_deployment to handle the base case where the db_model list is empty
don't delete all router models because of an empty list (guard sketch after this list)
Fixes https://github.com/BerriAI/litellm/issues/7196
* fix(anthropic/): fix handling of response_format for anthropic messages with the anthropic api (response_format sketch after this list)
* fix(fireworks_ai/): support passing response_format + tool call in same message
Addresses https://github.com/BerriAI/litellm/issues/7135
* Revert "fix(fireworks_ai/): support passing response_format + tool call in same message"
This reverts commit 6a30dc6929.
* test: fix test
* fix(replicate/): fix replicate default retry/polling logic (generic polling sketch after this list)
* test: add unit testing for router pattern matching
* test: update test to use default oai tokenizer
* test: mark flaky test
* test: skip flaky test
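
For the azure header fix above, a rough sketch of what passing per-request headers could look like from the caller's side - the deployment name, endpoint, and header values are placeholders, and forwarding them via extra_headers is an assumption about the calling convention, not a statement of the exact change:

# Hypothetical example of passing custom headers to an Azure OpenAI deployment
# through litellm; the deployment name, api_base, and header values are
# placeholders, and extra_headers is assumed to be forwarded to the endpoint.
import litellm

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",                   # hypothetical deployment
    api_base="https://my-resource.openai.azure.com",      # hypothetical resource
    api_version="2024-02-15-preview",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={"x-correlation-id": "abc-123"},        # header to forward
)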
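
For the tokenizer change above, a small sketch of the behavior it affects - token counting through litellm, which with this change should fall back to the local OpenAI/tiktoken tokenizer instead of fetching a Hugging Face tokenizer over the network (the model string is just an example):

# Illustrative token counting via litellm; the fallback tokenizer behavior,
# not this exact call, is what the commit changes.
import litellm

n_tokens = litellm.token_counter(
    model="gpt-4o",  # example model string
    messages=[{"role": "user", "content": "How many tokens is this?"}],
)
print(n_tokens)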
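
For the _delete_deployment fix above, the base case is roughly: an empty db_model list should not be treated as "every deployment was removed". A hypothetical sketch of that guard - names and structure are illustrative, not the actual proxy_server.py code:

# Hypothetical guard for the empty-list base case: if the DB returns no models,
# skip deletion instead of marking every router deployment as stale.
from typing import List


def find_deployments_to_delete(router_models: List[dict], db_models: List[dict]) -> List[dict]:
    if len(db_models) == 0:
        # Base case: an empty DB result should not wipe out all router models.
        return []
    db_ids = {m["model_id"] for m in db_models}
    return [m for m in router_models if m.get("model_id") not in db_ids]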
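
For the anthropic response_format fix above, caller-side usage might look like the following - the model string is an example, and the OpenAI-style response_format value is assumed to be what gets translated for the Anthropic API:

# Illustrative call passing an OpenAI-style response_format to an Anthropic
# model through litellm.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",  # example model
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' key."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)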
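
The replicate fix above touches retry/polling behavior; as background, Replicate predictions are created and then polled until they finish. A generic, hypothetical polling loop - not litellm's actual replicate handler:

# Hypothetical polling loop for an async prediction API: re-check status with a
# delay until the job reaches a terminal state or a deadline passes.
import time


def poll_until_complete(get_status, poll_interval: float = 0.5, timeout: float = 60.0) -> str:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()  # e.g. "starting" / "processing" / "succeeded"
        if status in ("succeeded", "failed", "canceled"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("prediction did not finish before the timeout")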
This commit is contained in:
parent 15a0572a06
commit e68bb4e051
19 changed files with 496 additions and 103 deletions
@@ -4019,15 +4019,15 @@ class Router:
         # Check if user is trying to use model_name == "*"
         # this is a catch all model for their specific api key
         if deployment.model_name == "*":
             if deployment.litellm_params.model == "*":
                 # user wants to pass through all requests to litellm.acompletion for unknown deployments
                 self.router_general_settings.pass_through_all_models = True
             else:
                 self.default_deployment = deployment.to_json(exclude_none=True)
         # if deployment.model_name == "*":
         #     if deployment.litellm_params.model == "*":
         #         # user wants to pass through all requests to litellm.acompletion for unknown deployments
         #         self.router_general_settings.pass_through_all_models = True
         #     else:
         #         self.default_deployment = deployment.to_json(exclude_none=True)
         # Check if user is using provider specific wildcard routing
         # example model_name = "databricks/*" or model_name = "anthropic/*"
-        elif "*" in deployment.model_name:
+        if "*" in deployment.model_name:
             # store this as a regex pattern - all deployments matching this pattern will be sent to this deployment
+            # Store deployment.model_name as a regex pattern
             self.pattern_router.add_pattern(
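
The hunk above changes the provider-wildcard check from elif to if, so a deployment whose model_name is the generic "*" is also registered with the pattern router. A minimal sketch of the setup this enables - model names and params are illustrative:

# Illustrative Router setup combining a provider-specific wildcard with the
# generic "*" catch-all; with the change above, the "*" deployment is also
# added to the pattern router, so generic wildcard model access groups resolve.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "anthropic/*",                 # provider-specific wildcard
            "litellm_params": {"model": "anthropic/*"},
        },
        {
            "model_name": "*",                           # generic catch-all
            "litellm_params": {"model": "*"},
        },
    ]
)

# Requests that don't match a more specific pattern fall through to "*", e.g.:
# response = router.completion(model="some-unknown-model", messages=[...])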