Litellm dev 04 05 2025 p2 (#9774)

* test: move test to just checking async

* fix(transformation.py): handle function call with no schema
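
  A minimal sketch of the likely idea, with illustrative names (not litellm's actual transformation code): default a missing `parameters` schema to an empty object schema so the provider payload still validates.

  ```python
  def ensure_parameters_schema(function_def: dict) -> dict:
      # Hypothetical helper: if a tool/function declaration arrives without a
      # "parameters" schema, substitute a minimal empty object schema.
      function_def.setdefault(
          "parameters", {"type": "object", "properties": {}}
      )
      return function_def
  ```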

* fix(utils.py): handle pydantic base model in message tool calls

Fix https://github.com/BerriAI/litellm/issues/9321
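
  A hedged sketch of the fix's shape, assuming an illustrative helper name: tool-call arguments may arrive as a pydantic BaseModel rather than a dict or string, so serialize them before building the request.

  ```python
  import json

  from pydantic import BaseModel


  def serialize_tool_call_arguments(arguments) -> str:
      """Return tool-call arguments as a JSON string, tolerating pydantic models."""
      if isinstance(arguments, BaseModel):
          return arguments.model_dump_json()  # pydantic v2; v1 would use .json()
      if isinstance(arguments, dict):
          return json.dumps(arguments)
      return arguments  # assume it is already a JSON string
  ```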

* fix(vertex_and_google_ai_studio.py): handle tools=[]

Fixes https://github.com/BerriAI/litellm/issues/9080
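
  An illustrative sketch (names are assumptions, not the actual vertex_and_google_ai_studio.py code): an empty tool list can trigger a provider error, so drop the key entirely when the caller passes `tools=[]`.

  ```python
  def map_tools_param(optional_params: dict, tools) -> dict:
      # An empty list is falsy, so tools=[] and tools=None both skip the key.
      if tools:
          optional_params["tools"] = tools
      return optional_params
  ```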

* test: remove max token restriction

* test: fix basic test

* fix(get_supported_openai_params.py): fix check

* fix(converse_transformation.py): support fake streaming for meta.llama3-3-70b-instruct-v1:0
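
  A minimal sketch of fake streaming under assumed names: make one non-streaming call and yield the full response as a single chunk, so callers can iterate over it exactly as if it were a real stream.

  ```python
  def fake_stream(completion_fn, **kwargs):
      # Hypothetical generator: force a blocking call, then emit the whole
      # completion as one chunk for models without real streaming support.
      kwargs["stream"] = False
      response = completion_fn(**kwargs)
      yield response
  ```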

* fix: fix test

* fix: parse out empty dictionary on dbrx streaming + tool calls
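
  An illustrative sketch of the parsing idea (the helper name is hypothetical): a streaming chunk carrying an empty dict for tool-call arguments is treated as "no arguments yet" rather than appended to the accumulated JSON.

  ```python
  def clean_streaming_tool_arguments(arguments):
      # Treat an empty dict (or its serialized form) as absent so the
      # accumulated argument string stays valid JSON.
      if arguments in ({}, "{}"):
          return None
      return arguments
  ```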

* fix(handle-'strict'-param-when-calling-fireworks-ai): fireworks ai does not support 'strict' param
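
  A hedged sketch covering this commit and the later "handle no strict in function" fix, with an illustrative helper name: strip `'strict'` from the function definition before sending, popping with a default so functions that never had the key pass through unchanged.

  ```python
  def remove_strict_param(function_def: dict) -> dict:
      cleaned = dict(function_def)  # avoid mutating the caller's definition
      cleaned.pop("strict", None)   # default handles functions with no 'strict'
      return cleaned
  ```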

* fix: fix ruff check


* fix: handle no strict in function

* fix: revert bedrock change - handle in separate PR
Commit fcf17d114f (parent d8f47fc9e5)
Author: Krish Dholakia
Date: 2025-04-07 21:02:52 -07:00 (committed by GitHub)
10 changed files with 214 additions and 11 deletions

@@ -3368,7 +3368,6 @@ async def test_completion_bedrock_httpx_models(sync_mode, model):
         messages=[{"role": "user", "content": "Hey! how's it going?"}],
         temperature=0.2,
         max_tokens=200,
-        stop=["stop sequence"],
     )
     assert isinstance(response, litellm.ModelResponse)
@@ -3380,7 +3379,6 @@ async def test_completion_bedrock_httpx_models(sync_mode, model):
         messages=[{"role": "user", "content": "Hey! how's it going?"}],
         temperature=0.2,
         max_tokens=100,
-        stop=["stop sequence"],
     )
     assert isinstance(response, litellm.ModelResponse)