Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-25 18:54:30 +00:00
Litellm dev bedrock anthropic 3 7 v2 (#8843)
* feat(bedrock/converse/transformation.py): support claude-3-7-sonnet reasoning_content transformation. Closes https://github.com/BerriAI/litellm/issues/8777
* fix(bedrock/): support returning `reasoning_content` on streaming for claude-3-7. Resolves https://github.com/BerriAI/litellm/issues/8777
* feat(bedrock/): unify converse reasoning content blocks for consistency across anthropic and bedrock
* fix(anthropic/chat/transformation.py): handle deepseek-style 'reasoning_content' extraction within transformation.py (simpler logic)
* feat(bedrock/): fix streaming to return blocks in consistent format
* fix: fix linting error
* test: fix test
* feat(factory.py): fix bedrock thinking block translation on tool calling (allows passing the thinking blocks back to bedrock for tool calling)
* fix(types/utils.py): don't exclude provider_specific_fields on model dump (ensures consistent responses)
* fix: fix linting errors
* fix(convert_dict_to_response.py): pass reasoning_content on root
* fix: test
* fix(streaming_handler.py): add helper util for setting model id
* fix(streaming_handler.py): fix setting model id on model response stream chunk
* fix(streaming_handler.py): fix linting error
* fix(streaming_handler.py): fix linting error
* fix(types/utils.py): add provider_specific_fields to model stream response
* fix(streaming_handler.py): copy provider specific fields and add them to the root of the streaming response
* fix(streaming_handler.py): fix check
* fix: fix test
* fix(types/utils.py): ensure messages content is always openai compatible
* fix(types/utils.py): fix delta object to always be openai compatible (only introduce new params if variable exists)
* test: fix bedrock nova tests
* test: skip flaky test
* test: skip flaky test in ci/cd
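To make the `reasoning_content` surface described in the commit message concrete, here is a minimal sketch of how a caller might read it for a Bedrock Claude 3.7 model. The model ID is a placeholder, and the `thinking` parameter and the `reasoning_content` attribute reads are assumptions drawn from the commit message rather than from this diff; treat it as an illustration, not the library's documented API.

```python
# Minimal sketch (illustrative only): reading `reasoning_content` from a
# Bedrock Claude 3.7 response via litellm. The model ID is a placeholder and
# the `thinking` parameter is assumed to be forwarded to the provider.
import litellm

MODEL = "bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0"  # placeholder model ID
messages = [{"role": "user", "content": "What is 27 * 43? Think it through."}]

# Non-streaming: per the commit message, reasoning_content is returned on the message.
response = litellm.completion(
    model=MODEL,
    messages=messages,
    thinking={"type": "enabled", "budget_tokens": 1024},  # assumed pass-through param
)
message = response.choices[0].message
print("answer:", message.content)
print("reasoning:", getattr(message, "reasoning_content", None))

# Streaming: the commit also adds reasoning_content to streamed delta chunks.
stream = litellm.completion(
    model=MODEL,
    messages=messages,
    thinking={"type": "enabled", "budget_tokens": 1024},
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="", flush=True)
    if delta.content:
        print(delta.content, end="", flush=True)
```

Since the commit also unifies the reasoning block format across the anthropic and bedrock routes, the same read pattern should apply when the model string points at Anthropic's API directly.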
This commit is contained in:
parent 40a3af7d61
commit ab7c4d1a0e
20 changed files with 447 additions and 149 deletions
```diff
@@ -130,14 +130,7 @@ async def test_create_llm_obs_payload():
     assert payload["meta"]["input"]["messages"] == [
         {"role": "user", "content": "Hello, world!"}
     ]
-    assert payload["meta"]["output"]["messages"] == [
-        {
-            "content": "Hi there!",
-            "role": "assistant",
-            "tool_calls": None,
-            "function_call": None,
-        }
-    ]
+    assert payload["meta"]["output"]["messages"][0]["content"] == "Hi there!"
     assert payload["metrics"]["input_tokens"] == 20
     assert payload["metrics"]["output_tokens"] == 10
     assert payload["metrics"]["total_tokens"] == 30
```