LiteLLM Minor Fixes & Improvements (10/28/2024) (#6475)

* fix(anthropic/chat/transformation.py): support anthropic disable_parallel_tool_use param

Fixes https://github.com/BerriAI/litellm/issues/6456
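
For illustration, a minimal usage sketch, assuming the param surfaces through the OpenAI-style `parallel_tool_calls` flag (which the transformation would map to Anthropic's `tool_choice.disable_parallel_tool_use`; mapping assumed here):

```python
import litellm

# Sketch only: `parallel_tool_calls=False` is assumed to translate to
# Anthropic's `tool_choice: {..., "disable_parallel_tool_use": true}`.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "What's the weather in SF and NYC?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    parallel_tool_calls=False,
)
```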

* feat(anthropic/chat/transformation.py): support anthropic computer tool use

Closes https://github.com/BerriAI/litellm/issues/6427
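
A sketch of passing Anthropic's computer-use beta tool through litellm; the `computer_20241022` tool shape comes from Anthropic's docs, and litellm is assumed (per this change) to attach the matching beta header automatically:

```python
import litellm

# Sketch: Anthropic computer-use tool passed through `tools`. The required
# `computer-use-2024-10-22` beta header is assumed to be added under the hood.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Take a screenshot of the desktop."}],
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
            "display_number": 1,
        }
    ],
)
```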

* fix(vertex_ai/common_utils.py): parse out '$schema' when calling vertex ai

Fixes an issue when calling Vertex AI from the Vercel SDK
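
A hypothetical sketch of the kind of cleanup implied here: Vertex AI rejects JSON-Schema metadata keys such as `$schema` (which the Vercel SDK emits in tool schemas), so they need to be stripped recursively before the request is sent. The helper name is illustrative:

```python
def strip_schema_field(parameters: dict) -> dict:
    """Hypothetical sketch: drop '$schema' keys (emitted by e.g. the Vercel SDK)
    before sending a tool's JSON schema to Vertex AI, which rejects them."""
    parameters.pop("$schema", None)
    for value in parameters.values():
        if isinstance(value, dict):
            strip_schema_field(value)
        elif isinstance(value, list):
            for item in value:
                if isinstance(item, dict):
                    strip_schema_field(item)
    return parameters
```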

* fix(main.py): add 'extra_headers' support for azure on all translation endpoints

Fixes https://github.com/BerriAI/litellm/issues/6465
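
With this fix, `extra_headers` should flow through on Azure's non-chat endpoints too (embedding, image generation, transcription, speech). An illustrative call, assuming Azure credentials are configured via environment variables:

```python
import litellm

response = litellm.speech(
    model="azure/tts-1",  # illustrative Azure deployment name
    input="Hello world",
    voice="alloy",
    extra_headers={"X-Correlation-Id": "trace-123"},  # now forwarded to Azure
)
```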

* fix: fix linting errors

* fix(transformation.py): handle no beta headers for anthropic

* test: cleanup test

* fix: fix linting error

* fix: fix linting errors

* fix: fix linting errors

* fix(transformation.py): handle dummy tool call

* fix(main.py): fix linting error

* fix(azure.py): pass required param

* LiteLLM Minor Fixes & Improvements (10/24/2024) (#6441)

* fix(azure.py): handle /openai/deployment in azure api base
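
A hypothetical sketch of the normalization this implies (the real logic lives in azure.py): if the user-supplied api_base already contains the `/openai/deployments/...` route, it should not be appended a second time:

```python
# Hypothetical helper, for illustration only.
def normalize_azure_api_base(api_base: str) -> str:
    # If the route is already present, keep only the resource root so the
    # deployment path is not duplicated when the request URL is built.
    if "/openai" in api_base:
        return api_base.split("/openai")[0]
    return api_base

assert (
    normalize_azure_api_base("https://my-rsrc.openai.azure.com/openai/deployments/gpt-4o")
    == "https://my-rsrc.openai.azure.com"
)
```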

* fix(factory.py): fix faulty anthropic tool result translation check

Fixes https://github.com/BerriAI/litellm/issues/6422

* fix(gpt_transformation.py): add support for parallel_tool_calls to azure

Fixes https://github.com/BerriAI/litellm/issues/6440
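
Illustrative usage, assuming env-configured Azure credentials; `parallel_tool_calls` is now forwarded to Azure OpenAI deployments that support it:

```python
import litellm

response = litellm.completion(
    model="azure/gpt-4o",  # illustrative deployment name
    messages=[{"role": "user", "content": "Check the weather in SF and NYC."}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            },
        }
    ],
    parallel_tool_calls=True,
)
```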

* fix(factory.py): support anthropic prompt caching for tool results
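
For illustration, the message shape this should now handle: a `cache_control` marker on a tool-result message, translated into Anthropic's prompt-caching block (exact shape assumed):

```python
messages = [
    {"role": "user", "content": "Summarize the report."},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "fetch_report", "arguments": "{}"},
            }
        ],
    },
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "...large, reusable tool output...",
        "cache_control": {"type": "ephemeral"},  # now honored on tool results
    },
]
```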

* fix(vertex_ai/common_utils): don't pop non-null required field

Fixes https://github.com/BerriAI/litellm/issues/6426
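
A tiny hypothetical sketch of the corrected behavior: the Vertex schema cleanup should drop `required` only when it is empty, never when it actually lists fields:

```python
# Hypothetical helper, for illustration only.
def prune_required(schema: dict) -> dict:
    if not schema.get("required"):  # None or [] -> drop; a non-empty list survives
        schema.pop("required", None)
    return schema

assert prune_required({"type": "object", "required": ["city"]})["required"] == ["city"]
```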

* feat(vertex_ai.py): support code_execution tool call for vertex ai + gemini

Closes https://github.com/BerriAI/litellm/issues/6434
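
Illustrative usage; the `{"code_execution": {}}` tool shape follows the Gemini API, and its passthrough via litellm's `tools` param is assumed here:

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Compute the 20th Fibonacci number."}],
    tools=[{"code_execution": {}}],  # Gemini's built-in code-execution tool
)
```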

* build(model_prices_and_context_window.json): Add 'supports_assistant_prefill' for bedrock claude-3-5-sonnet v2 models

Closes https://github.com/BerriAI/litellm/issues/6437
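
The new flag can be read back via `litellm.get_model_info` (illustrative; exact return keys assumed):

```python
import litellm

info = litellm.get_model_info("anthropic.claude-3-5-sonnet-20241022-v2:0")
print(info.get("supports_assistant_prefill"))  # expected: True after this change
```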

* fix(types/utils.py): fix linting

* test: update test to include required fields

* test: fix test

* test: handle flaky test

* test: remove e2e test - hitting gemini rate limits

* Litellm dev 10 26 2024 (#6472)

* docs(exception_mapping.md): add missing exception types

Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183

* fix(main.py): register custom model pricing with specific key

Ensure custom model pricing is registered to the specific model+provider key combination
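
Illustrative registration, assuming the documented `litellm.register_model` shape; with this fix the pricing is keyed to the exact model+provider combination:

```python
import litellm

litellm.register_model(
    {
        "openai/my-fine-tuned-model": {  # hypothetical provider-qualified key
            "input_cost_per_token": 0.0000025,
            "output_cost_per_token": 0.00001,
            "litellm_provider": "openai",
            "mode": "chat",
        }
    }
)
```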

* test: make testing more robust for custom pricing

* fix(redis_cache.py): instrument otel logging for sync redis calls

ensures complete coverage for all redis cache calls

* (Testing) Add unit testing for DualCache - ensure in memory cache is used when expected  (#6471)

* test test_dual_cache_get_set

* unit testing for dual cache

* fix async_set_cache_sadd

* test_dual_cache_local_only

* redis otel tracing + async support for latency routing (#6452)

* refactor: pass parent_otel_span for redis caching calls in router

allows more observability into which calls are causing latency issues

* test: update tests with new params

* refactor: ensure e2e otel tracing for router

* refactor(router.py): add more otel tracing across router

catch all latency issues for router requests

* fix: fix linting error

* fix(router.py): fix linting error

* fix: fix test

* test: fix tests

* fix(dual_cache.py): pass ttl to redis cache
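
A rough sketch of the behavior being fixed, assuming the public `DualCache` interface (import path and kwargs may differ by version): a TTL passed at set time should now reach the Redis layer, not only the in-memory layer:

```python
from litellm.caching import DualCache, InMemoryCache, RedisCache

cache = DualCache(
    in_memory_cache=InMemoryCache(),
    redis_cache=RedisCache(host="localhost", port=6379),
)
# ttl is now forwarded to Redis as well as the in-memory cache (shape assumed)
cache.set_cache("router:deployment-1:latency", 0.42, ttl=60)
```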

* fix: fix param

* fix(dual_cache.py): set default value for parent_otel_span

* fix(transformation.py): support 'response_format' for anthropic calls
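
Illustrative usage; Anthropic has no native JSON mode, so `response_format` is commonly implemented via a forced tool call (mechanism assumed here):

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Return a JSON object with keys a and b."}],
    response_format={"type": "json_object"},
)
```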

* fix(transformation.py): check for cache_control inside 'function' block
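
For illustration, the tool shape this check should now catch: a `cache_control` marker nested inside the `function` block rather than at the top level (shape assumed):

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_docs",
            "description": "Search a large, static documentation corpus",
            "parameters": {"type": "object", "properties": {}},
            "cache_control": {"type": "ephemeral"},  # now detected here too
        },
    }
]
```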

* fix: fix linting error

* fix: fix linting errors

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Commit: 41339dfbed (parent: 9431c1a1b5)
Author: Krish Dholakia
Date: 2024-10-29 17:20:24 -07:00 (committed by GitHub)
19 changed files with 684 additions and 253 deletions

litellm/main.py:

@@ -3377,6 +3377,9 @@ def embedding(  # noqa: PLR0915
                 "azure_ad_token", None
             ) or get_secret_str("AZURE_AD_TOKEN")
+
+            if extra_headers is not None:
+                optional_params["extra_headers"] = extra_headers
             api_key = (
                 api_key
                 or litellm.api_key
@@ -4458,7 +4461,10 @@ def image_generation(  # noqa: PLR0915
     metadata = kwargs.get("metadata", {})
     litellm_logging_obj: LiteLLMLoggingObj = kwargs.get("litellm_logging_obj")  # type: ignore
     client = kwargs.get("client", None)
+    extra_headers = kwargs.get("extra_headers", None)
+    headers: dict = kwargs.get("headers", None) or {}
+    if extra_headers is not None:
+        headers.update(extra_headers)
     model_response: ImageResponse = litellm.utils.ImageResponse()
     if model is not None or custom_llm_provider is not None:
         model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)  # type: ignore
@@ -4589,6 +4595,14 @@ def image_generation(  # noqa: PLR0915
                 "azure_ad_token", None
             ) or get_secret_str("AZURE_AD_TOKEN")
+
+            default_headers = {
+                "Content-Type": "application/json;",
+                "api-key": api_key,
+            }
+            for k, v in default_headers.items():
+                if k not in headers:
+                    headers[k] = v
+
             model_response = azure_chat_completions.image_generation(
                 model=model,
                 prompt=prompt,
@@ -4601,6 +4615,7 @@ def image_generation(  # noqa: PLR0915
                 api_version=api_version,
                 aimg_generation=aimg_generation,
                 client=client,
+                headers=headers,
             )
         elif custom_llm_provider == "openai":
             model_response = openai_chat_completions.image_generation(
@@ -4797,11 +4812,7 @@ def transcription(
     """
     atranscription = kwargs.get("atranscription", False)
     litellm_logging_obj: LiteLLMLoggingObj = kwargs.get("litellm_logging_obj")  # type: ignore
-    kwargs.get("litellm_call_id", None)
-    kwargs.get("logger_fn", None)
-    kwargs.get("proxy_server_request", None)
-    kwargs.get("model_info", None)
-    kwargs.get("metadata", {})
+    extra_headers = kwargs.get("extra_headers", None)
     kwargs.pop("tags", [])
     drop_params = kwargs.get("drop_params", None)
@@ -4857,6 +4868,8 @@ def transcription(
             or get_secret_str("AZURE_API_KEY")
         )
+
+        optional_params["extra_headers"] = extra_headers
         response = azure_audio_transcriptions.audio_transcriptions(
             model=model,
             audio_file=file,
@@ -4975,6 +4988,7 @@ def speech(
     user = kwargs.get("user", None)
     litellm_call_id: Optional[str] = kwargs.get("litellm_call_id", None)
     proxy_server_request = kwargs.get("proxy_server_request", None)
+    extra_headers = kwargs.get("extra_headers", None)
     model_info = kwargs.get("model_info", None)
     model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)  # type: ignore
     kwargs.pop("tags", [])
@@ -5087,7 +5101,8 @@ def speech(
             "AZURE_AD_TOKEN"
         )
         headers = headers or litellm.headers
+
+        if extra_headers:
+            optional_params["extra_headers"] = extra_headers
         response = azure_chat_completions.audio_speech(
             model=model,