litellm-mirror/litellm/llms
Latest commit: b961f96b35 — Litellm dev 12 25 2024 p1 (#7411), Krish Dholakia, 2024-12-25 17:36:30 -08:00

* test(test_watsonx.py): e2e unit test for watsonx custom header — covers https://github.com/BerriAI/litellm/issues/7408
* fix(common_utils.py): handle auth token already present in headers (watsonx + openai-like base handler) — fixes https://github.com/BerriAI/litellm/issues/7408
* fix(watsonx/chat): fix chat route — fixes https://github.com/BerriAI/litellm/issues/7408
* fix(huggingface/chat/handler.py): fix huggingface async completion calls
* Correct handling of max_retries=0 to disable AzureOpenAI retries (#7379)
* test: fix test

Co-authored-by: Minh Duc <phamminhduc0711@gmail.com>
ai21/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
anthropic LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) 2024-12-23 22:02:52 -08:00
azure Litellm dev 12 25 2024 p1 (#7411) 2024-12-25 17:36:30 -08:00
azure_ai (code refactor) - Add BaseRerankConfig. Use BaseRerankConfig for cohere/rerank and azure_ai/rerank (#7319) 2024-12-19 17:03:34 -08:00
base_llm Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
bedrock Litellm dev 12 24 2024 p4 (#7407) 2024-12-24 20:24:06 -08:00
cerebras (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
clarifai (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
cloudflare/chat Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
codestral/completion (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
cohere (code refactor) - Add BaseRerankConfig. Use BaseRerankConfig for cohere/rerank and azure_ai/rerank (#7319) 2024-12-19 17:03:34 -08:00
custom_httpx Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
databricks (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
deepinfra/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
deepseek LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) 2024-12-23 22:02:52 -08:00
deprecated_providers (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
empower/chat LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config (#7148) 2024-12-10 17:12:42 -08:00
fireworks_ai (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
friendliai/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
galadriel/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
gemini LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) 2024-12-23 22:02:52 -08:00
github/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
groq (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
hosted_vllm (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
huggingface Litellm dev 12 25 2024 p1 (#7411) 2024-12-25 17:36:30 -08:00
infinity/rerank (feat) add infinity rerank models (#7321) 2024-12-19 18:30:28 -08:00
jina_ai (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
lm_studio (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
mistral (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
nlp_cloud (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
nvidia_nim (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
ollama Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
oobabooga (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
openai (Feat) add `"/v1/batches/{batch_id:path}/cancel"` endpoint (#7406) 2024-12-24 20:23:50 -08:00
openai_like Litellm dev 12 25 2024 p1 (#7411) 2024-12-25 17:36:30 -08:00
openrouter/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
perplexity/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
petals (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
predibase (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
replicate Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
sagemaker (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
sambanova (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
together_ai (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
triton Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
vertex_ai Litellm dev 12 24 2024 p3 (#7403) 2024-12-24 18:07:53 -08:00
vllm/completion (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
voyage/embedding Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
watsonx Litellm dev 12 25 2024 p1 (#7411) 2024-12-25 17:36:30 -08:00
xai/chat (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
__init__.py add linting 2023-08-18 11:05:05 -07:00
base.py Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
baseten.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
custom_llm.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
maritalk.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
ollama_chat.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
README.md LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) 2024-09-14 10:02:55 -07:00
volcengine.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00

File Structure

August 27th, 2024

To make it easy to see how calls are transformed for each model/provider:

We are moving all supported LiteLLM providers to a folder structure, where the folder name is the supported LiteLLM provider name.

Each folder will contain a *_transformation.py file, which holds all of the request/response transformation logic for that provider, making it easy to see how calls are modified.

E.g. cohere/, bedrock/.
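As a rough illustration of the *_transformation.py pattern described above, here is a minimal sketch of what such a module might contain. The class name, method names, and the provider's parameter names (`chat_history`, `max_output_tokens`, `output_text`) are all hypothetical examples, not litellm's actual API:

```python
# Hypothetical sketch of a provider transformation module.
# All names here are illustrative, not litellm's real classes/fields.

from typing import Any, Dict, List


class ExampleChatTransformation:
    """Maps an OpenAI-style request/response to an imaginary provider's wire format."""

    def transform_request(
        self,
        model: str,
        messages: List[Dict[str, str]],
        optional_params: Dict[str, Any],
    ) -> Dict[str, Any]:
        # Rename OpenAI-style params to the provider's own names.
        provider_params = {"max_output_tokens": optional_params.get("max_tokens", 256)}
        return {"model": model, "chat_history": messages, **provider_params}

    def transform_response(self, raw_response: Dict[str, Any]) -> Dict[str, Any]:
        # Normalize the provider's reply back into the OpenAI response shape.
        return {
            "choices": [
                {"message": {"role": "assistant", "content": raw_response["output_text"]}}
            ]
        }


t = ExampleChatTransformation()
req = t.transform_request(
    "example-model", [{"role": "user", "content": "hi"}], {"max_tokens": 10}
)
print(req["max_output_tokens"])  # prints 10
```

Concentrating this mapping logic in one file per provider means a reader can diff two providers' transformation files to see exactly how their request/response formats differ.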