LiteLLM Minor Fixes and Improvements (#5537)

* fix(vertex_ai): Fixes issue where a multimodal message without text was failing Vertex AI calls

Fixes https://github.com/BerriAI/litellm/issues/5515

* fix(azure.py): move to using httphandler for oidc token calls

Fixes issue where SSL certificates weren't being picked up as expected

Closes https://github.com/BerriAI/litellm/issues/5522

* feat: Allows admin to set a default_max_internal_user_budget in config, and allows setting more specific values as env vars

* fix(proxy_server.py): fix read for max_internal_user_budget

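The "more specific values as env vars" part of the feature could look like the following sketch. The helper name and env var names here are illustrative, not the exact ones the proxy reads:

```python
import os
from typing import Optional


def budget_from_env(name: str) -> Optional[float]:
    """Read an optional float budget from an environment variable.

    Illustrative helper: returns None when the variable is unset,
    mirroring an 'unset budget' in the proxy config.
    """
    raw = os.getenv(name)
    return float(raw) if raw is not None else None
```

A per-user override read this way would take precedence over the config-level default_max_internal_user_budget.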
* build(model_prices_and_context_window.json): add regional gpt-4o-2024-08-06 pricing

Closes https://github.com/BerriAI/litellm/issues/5540

* test: skip re-test
Krish Dholakia 2024-09-05 18:03:34 -07:00 committed by GitHub
parent a074f5801e
commit 355f4a7c90
10 changed files with 117 additions and 5 deletions

@@ -1645,6 +1645,14 @@ class ProxyConfig:
        verbose_proxy_logger.debug(
            f"litellm.post_call_rules: {litellm.post_call_rules}"
        )
    elif key == "max_internal_user_budget":
        litellm.max_internal_user_budget = float(value)  # type: ignore
    elif key == "default_max_internal_user_budget":
        litellm.default_max_internal_user_budget = float(value)
        if litellm.max_internal_user_budget is None:
            litellm.max_internal_user_budget = (
                litellm.default_max_internal_user_budget
            )
    elif key == "custom_provider_map":
        from litellm.utils import custom_llm_setup
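A minimal standalone sketch of the fallback behavior in the hunk above: the default budget only takes effect when no explicit max_internal_user_budget was set. The function name and return shape are hypothetical; the real proxy mutates module-level litellm attributes instead.

```python
from typing import Optional, Tuple


def apply_budget_config(
    config: dict,
    max_internal_user_budget: Optional[float] = None,
    default_max_internal_user_budget: Optional[float] = None,
) -> Tuple[Optional[float], Optional[float]]:
    """Hypothetical helper mirroring the config-parsing logic in the diff."""
    for key, value in config.items():
        if key == "max_internal_user_budget":
            max_internal_user_budget = float(value)
        elif key == "default_max_internal_user_budget":
            default_max_internal_user_budget = float(value)
            # The default only fills in when no explicit budget exists.
            if max_internal_user_budget is None:
                max_internal_user_budget = default_max_internal_user_budget
    return max_internal_user_budget, default_max_internal_user_budget
```

With only a default set, it becomes the effective budget; an explicitly set max_internal_user_budget is never overwritten by the default.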