litellm-mirror/litellm/llms
Directories:

ai21/chat
aiohttp_openai/chat
anthropic
azure
azure_ai
base_llm
bedrock
cerebras
clarifai
cloudflare/chat
codestral/completion
cohere
custom_httpx
databricks
deepgram
deepinfra/chat
deepseek
deprecated_providers
empower/chat
fireworks_ai
friendliai/chat
galadriel/chat
gemini
github/chat
groq
hosted_vllm
huggingface
infinity/rerank
jina_ai
litellm_proxy/chat
lm_studio
mistral
nlp_cloud
nvidia_nim
ollama
oobabooga
openai
openai_like
openrouter/chat
perplexity/chat
petals
predibase
replicate
sagemaker
sambanova
together_ai
topaz
triton
vertex_ai
vllm/completion
voyage/embedding
watsonx
xai/chat

Files:

__init__.py
base.py
baseten.py
custom_llm.py
maritalk.py
ollama_chat.py
README.md
volcengine.py

File Structure

August 27th, 2024

To make it easy to see how calls are transformed for each model/provider, we are moving all supported LiteLLM providers into a folder structure, where the folder name is the supported LiteLLM provider name.

Each folder will contain a *_transformation.py file, which holds all of the request/response transformation logic, making it easy to see how calls are modified.

E.g. cohere/, bedrock/.
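To illustrate the idea, a provider's transformation module centralizes the mapping between OpenAI-style parameters and the provider's own request/response shapes. The sketch below is hypothetical: the class and method names (`MyProviderChatConfig`, `transform_request`, `transform_response`) are illustrative, not LiteLLM's actual API. The `top_k` → `topK` rename mirrors the kind of camel-case fix referenced in the Bedrock Anthropic commit above.

```python
# Hypothetical sketch of a provider transformation module
# (e.g. my_provider/chat/transformation.py). Names are illustrative,
# not LiteLLM's actual classes or signatures.

from typing import Any, Dict, List


class MyProviderChatConfig:
    """Maps OpenAI-style chat params to a provider-specific payload and back."""

    def get_supported_openai_params(self) -> List[str]:
        # Which OpenAI-style params this provider understands.
        return ["temperature", "max_tokens", "top_p", "top_k"]

    def transform_request(
        self, messages: List[Dict[str, str]], optional_params: Dict[str, Any]
    ) -> Dict[str, Any]:
        # Drop params the provider doesn't support, then rename the rest
        # into the provider's expected casing.
        supported = self.get_supported_openai_params()
        params = {k: v for k, v in optional_params.items() if k in supported}
        if "top_k" in params:
            # Some providers expect camelCase (e.g. topK).
            params["topK"] = params.pop("top_k")
        return {"messages": messages, **params}

    def transform_response(self, raw_response: Dict[str, Any]) -> Dict[str, Any]:
        # Normalize the provider's response back into an OpenAI-style shape.
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": raw_response.get("text", ""),
                    }
                }
            ]
        }
```

Keeping both directions of the mapping in one file is what makes it easy to audit how a given provider's calls are modified.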