Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-27 03:34:10 +00:00)
Latest commit:

* feat(bedrock/converse/transformation.py): support claude-3-7-sonnet reasoning_content transformation (Closes https://github.com/BerriAI/litellm/issues/8777)
* fix(bedrock/): support returning `reasoning_content` on streaming for claude-3-7 (Resolves https://github.com/BerriAI/litellm/issues/8777)
* feat(bedrock/): unify converse reasoning content blocks for consistency across anthropic and bedrock
* fix(anthropic/chat/transformation.py): handle deepseek-style 'reasoning_content' extraction within transformation.py with simpler logic
* feat(bedrock/): fix streaming to return blocks in consistent format
* fix: fix linting error
* test: fix test
* feat(factory.py): fix bedrock thinking block translation on tool calling; allows passing the thinking blocks back to bedrock for tool calling
* fix(types/utils.py): don't exclude provider_specific_fields on model dump; ensures consistent responses
* fix: fix linting errors
* fix(convert_dict_to_response.py): pass reasoning_content on root
* fix: test
* fix(streaming_handler.py): add helper util for setting model id
* fix(streaming_handler.py): fix setting model id on model response stream chunk
* fix(streaming_handler.py): fix linting error
* fix(streaming_handler.py): fix linting error
* fix(types/utils.py): add provider_specific_fields to model stream response
* fix(streaming_handler.py): copy provider specific fields and add them to the root of the streaming response
* fix(streaming_handler.py): fix check
* fix: fix test
* fix(types/utils.py): ensure messages content is always openai compatible
* fix(types/utils.py): fix delta object to always be openai compatible; only introduce new params if variable exists
* test: fix bedrock nova tests
* test: skip flaky test
* test: skip flaky test in ci/cd
Directories:

ai21/chat
aiohttp_openai/chat
anthropic
azure
azure_ai
base_llm
bedrock
cerebras
clarifai
cloudflare/chat
codestral/completion
cohere
custom_httpx
databricks
deepgram
deepinfra/chat
deepseek
deprecated_providers
empower/chat
fireworks_ai
friendliai/chat
galadriel/chat
gemini
github/chat
groq
hosted_vllm
huggingface
infinity/rerank
jina_ai
litellm_proxy/chat
lm_studio
mistral
nlp_cloud
nvidia_nim
ollama
oobabooga
openai
openai_like
openrouter/chat
perplexity/chat
petals
predibase
replicate
sagemaker
sambanova
together_ai
topaz
triton
vertex_ai
vllm/completion
voyage/embedding
watsonx
xai/chat

Files:

__init__.py
base.py
baseten.py
custom_llm.py
maritalk.py
ollama_chat.py
README.md
volcengine.py
File Structure
August 27th, 2024

To make it easy to see how calls are transformed for each model/provider, we are moving all supported litellm providers into a folder structure, where each folder is named after the litellm provider it supports.

Each folder will contain a *_transformation.py file, which holds all of the request/response transformation logic, making it easy to see how calls are modified.

E.g. cohere/, bedrock/.
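As a sketch of the pattern: a provider's transformation file typically maps an OpenAI-style request into the provider's wire format, and maps the provider's raw response back into the OpenAI chat-completion shape. The class and method names below (`ExampleProviderConfig`, `transform_request`, `transform_response`, and the field names in the payloads) are illustrative assumptions for this sketch, not litellm's actual interface.

```python
# Minimal sketch of a per-provider transformation class, assuming a
# hypothetical provider whose API uses "model_id"/"inputs"/"max_output_tokens"
# on the request side and "output_text" on the response side.


class ExampleProviderConfig:
    """Translates between OpenAI-style calls and a provider-specific API."""

    def transform_request(self, model: str, messages: list, params: dict) -> dict:
        # Rename OpenAI-style parameters to the provider's equivalents.
        payload = {"model_id": model, "inputs": messages}
        if "max_tokens" in params:
            payload["max_output_tokens"] = params["max_tokens"]
        return payload

    def transform_response(self, raw: dict) -> dict:
        # Normalize the provider's raw response back to the OpenAI
        # chat-completion shape (a "choices" list of messages).
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": raw.get("output_text", ""),
                    }
                }
            ]
        }
```

Keeping both directions of the mapping in one file per provider is what makes it easy to audit, at a glance, exactly how a call is modified before it leaves litellm and after the response comes back.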