litellm-mirror/litellm/llms
Ishaan Jaff f47987e673
(Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013)
* anthropic_messages_handler v0

* fix /messages

* working messages with router methods

* test_anthropic_messages_handler_litellm_router_non_streaming

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* AnthropicMessagesConfig

* _handle_anthropic_messages_response_logging

* working with /v1/messages endpoint

* working /v1/messages endpoint

* refactor to use router factory function

* use aanthropic_messages

* use BaseConfig for Anthropic /v1/messages

* track api key, team on /v1/messages endpoint

* fix get_logging_payload

* BaseAnthropicMessagesTest

* align test config

* test_anthropic_messages_with_thinking

* test_anthropic_streaming_with_thinking

* fix - display anthropic url for debugging

* test_bad_request_error_handling

* test_anthropic_messages_router_streaming_with_bad_request

* fix ProxyException

* test_bad_request_error_handling_streaming

* use provider_specific_header

* test_anthropic_messages_with_extra_headers

* test_anthropic_messages_to_wildcard_model

* fix gcs pub sub test

* standard_logging_payload

* fix unit testing for anthropic /v1/messages support

* fix pass through anthropic messages api

* delete dead code

* fix anthropic pass through response

* revert change to spend tracking utils

* fix get_litellm_metadata_from_kwargs

* fix spend logs payload json

* proxy_pass_through_endpoint_tests

* TestAnthropicPassthroughBasic

* fix pass through tests

* test_async_vertex_proxy_route_api_key_auth

* _handle_anthropic_messages_response_logging

* vertex_credentials

* test_set_default_vertex_config

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* test_ageneric_api_call_with_fallbacks_basic

* test__aadapter_completion
2025-03-06 00:43:08 -08:00
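The commit list above references a router-level Anthropic Messages method (`aanthropic_messages`) backing the `/v1/messages` endpoint. Below is a minimal usage sketch, assuming that method mirrors the Anthropic Messages request shape (model, messages, max_tokens); the model alias and the exact signature are illustrative assumptions, not taken from the source.

```python
# Minimal sketch, assuming a router-level `aanthropic_messages` method that
# mirrors the Anthropic /v1/messages request shape. The model alias and the
# exact signature are assumptions for illustration.
import asyncio

from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "claude-3-5-sonnet",  # hypothetical alias
            "litellm_params": {"model": "anthropic/claude-3-5-sonnet-20241022"},
        }
    ]
)


async def main() -> None:
    response = await router.aanthropic_messages(
        model="claude-3-5-sonnet",
        messages=[{"role": "user", "content": "Hello"}],
        max_tokens=256,
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
```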
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ai21/chat | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| aiohttp_openai/chat | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| anthropic | (Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013) | 2025-03-06 00:43:08 -08:00 |
| azure | Litellm dev 03 04 2025 p3 (#8997) | 2025-03-04 21:58:03 -08:00 |
| azure_ai | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| base_llm | (Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013) | 2025-03-06 00:43:08 -08:00 |
| bedrock | Litellm dev 03 05 2025 p3 (#9023) | 2025-03-05 22:31:39 -08:00 |
| cerebras | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| clarifai | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| cloudflare/chat | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| codestral/completion | LiteLLM Minor Fixes & Improvements (01/16/2025) - p2 (#7828) | 2025-02-02 23:17:50 -08:00 |
| cohere | Add cohere v2/rerank support (#8421) (#8605) | 2025-02-22 22:25:29 -08:00 |
| custom_httpx | Fix calling claude via invoke route + response_format support for claude on invoke route (#8908) | 2025-02-28 17:56:26 -08:00 |
| databricks | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| deepgram | Litellm dev 01 02 2025 p2 (#7512) | 2025-01-02 21:57:51 -08:00 |
| deepinfra/chat | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| deepseek | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| deprecated_providers | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| empower/chat | LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config (#7148) | 2024-12-10 17:12:42 -08:00 |
| fireworks_ai | Fix bedrock passing response_format: {"type": "text"} (#8900) | 2025-02-28 20:09:59 -08:00 |
| friendliai/chat | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| galadriel/chat | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| gemini | Support format param for specifying image type (#9019) | 2025-03-05 19:52:53 -08:00 |
| github/chat | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| groq | fix(groq/chat/transformation.py): fix groq response_format transformation (#7565) | 2025-01-04 19:39:04 -08:00 |
| hosted_vllm | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| huggingface | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| infinity/rerank | add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684) | 2025-02-20 21:23:54 -08:00 |
| jina_ai | Add cohere v2/rerank support (#8421) (#8605) | 2025-02-22 22:25:29 -08:00 |
| litellm_proxy/chat | [BETA] Add OpenAI /images/variations + Topaz API support (#7700) | 2025-01-11 23:27:46 -08:00 |
| lm_studio | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| mistral | _handle_tool_call_message linting | 2025-01-16 22:34:16 -08:00 |
| nlp_cloud | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| nvidia_nim | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| ollama | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| oobabooga | Litellm dev 12 30 2024 p2 (#7495) | 2025-01-01 18:57:29 -08:00 |
| openai | Support format param for specifying image type (#9019) | 2025-03-05 19:52:53 -08:00 |
| openai_like | fix: propagating json_mode to acompletion (#8133) | 2025-01-30 21:17:26 -08:00 |
| openrouter/chat | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| perplexity/chat | pplx - fix supports tool choice openai param (#8496) | 2025-02-12 17:21:16 -08:00 |
| petals | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| predibase | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| replicate | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| sagemaker | Litellm dev 02 27 2025 p6 (#8891) | 2025-02-28 14:34:17 -08:00 |
| sambanova | update sambanova docs (#8875) | 2025-02-27 20:23:33 -08:00 |
| together_ai | add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684) | 2025-02-20 21:23:54 -08:00 |
| topaz | (Feat) - Add /bedrock/invoke support for all Anthropic models (#8383) | 2025-02-07 22:41:11 -08:00 |
| triton | Litellm dev 12 30 2024 p2 (#7495) | 2025-01-01 18:57:29 -08:00 |
| vertex_ai | Support format param for specifying image type (#9019) | 2025-03-05 19:52:53 -08:00 |
| vllm/completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| voyage/embedding | Litellm dev 12 30 2024 p2 (#7495) | 2025-01-01 18:57:29 -08:00 |
| watsonx | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| xai/chat | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| __init__.py | add linting | 2023-08-18 11:05:05 -07:00 |
| base.py | Complete 'requests' library removal (#7350) | 2024-12-22 07:21:25 -08:00 |
| baseten.py | test(base_llm_unit_tests.py): add test to ensure drop params is respe… (#8224) | 2025-02-03 16:04:44 -08:00 |
| custom_llm.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| maritalk.py | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| ollama_chat.py | fix: ollama chat async stream error propagation (#8870) | 2025-02-28 08:11:56 -08:00 |
| README.md | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| volcengine.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |

## File Structure

### August 27th, 2024

To make it easy to see how calls are transformed for each model/provider, we are working on moving all supported litellm providers to a folder structure, where the folder name is the supported litellm provider name.

Each folder will contain a `*_transformation.py` file with all of the request/response transformation logic for that provider, making it easy to see how calls are modified.

E.g. `cohere/`, `bedrock/`.
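
As a rough illustration of this pattern (not the actual litellm source), a provider folder's `*_transformation.py` might look like the sketch below. The provider name, class name, and method signatures are hypothetical placeholders, not the exact litellm base config interface.

```python
# Hypothetical sketch of a per-provider transformation module, e.g.
# llms/<provider>/chat/transformation.py. Class and method names are
# illustrative placeholders, not the exact litellm base config interface.
from typing import Any, Dict, List


class ExampleProviderConfig:
    """Maps OpenAI-style chat inputs to a provider's native request format and back."""

    def transform_request(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        optional_params: Dict[str, Any],
    ) -> Dict[str, Any]:
        # Build the provider-specific request body from OpenAI-style inputs.
        return {
            "model": model,
            "inputs": [
                {"role": m["role"], "text": m["content"]} for m in messages
            ],
            "max_output_tokens": optional_params.get("max_tokens", 1024),
        }

    def transform_response(self, raw_response: Dict[str, Any]) -> Dict[str, Any]:
        # Map the provider's native response back into the OpenAI chat format.
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": raw_response.get("output_text", ""),
                    },
                    "finish_reason": raw_response.get("stop_reason", "stop"),
                }
            ],
            "usage": raw_response.get("usage", {}),
        }
```

Keeping both directions of the mapping in a single file per provider is what makes it easy to audit, at a glance, how an OpenAI-style call is rewritten for that provider and how its response is normalized back.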