litellm-mirror/litellm/llms
Krish Dholakia c3f637012b Litellm dev 12 13 2024 p1 (#7219)
* fix(litellm_logging.py): pass user metadata to langsmith on sdk calls

* fix(litellm_logging.py): pass nested user metadata to logging integration - e.g. langsmith

* fix(exception_mapping_utils.py): catch and clarify watsonx `/text/chat` endpoint not supported error message.

Closes https://github.com/BerriAI/litellm/issues/7213

* fix(watsonx/common_utils.py): accept new 'WATSONX_IAM_URL' env var

allows users to use a local watsonx instance (a usage sketch follows the commit details below)

Fixes https://github.com/BerriAI/litellm/issues/4991

* fix(litellm_logging.py): cleanup unused function

* test: skip bad ibm test
2024-12-13 19:01:28 -08:00
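
The WATSONX_IAM_URL change noted in the commit above is a configuration detail; below is a minimal sketch of how it might be used from the SDK. The other WATSONX_* variable names and the model id are assumptions based on litellm's usual watsonx setup, not taken from this commit.

```python
import os

from litellm import completion

# All values below are illustrative. WATSONX_IAM_URL is the variable added in
# this commit; the other WATSONX_* names follow litellm's usual watsonx
# configuration and the model id is an assumption, not taken from this commit.
os.environ["WATSONX_IAM_URL"] = "https://iam.my-local-watsonx.example.com"
os.environ["WATSONX_URL"] = "https://my-local-watsonx.example.com"
os.environ["WATSONX_APIKEY"] = "my-api-key"
os.environ["WATSONX_PROJECT_ID"] = "my-project-id"

response = completion(
    model="watsonx/ibm/granite-13b-chat-v2",  # illustrative model id
    messages=[{"role": "user", "content": "Hello from a local watsonx deployment"}],
)
print(response.choices[0].message.content)
```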
ai21/chat Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
anthropic Litellm dev 12 12 2024 (#7203) 2024-12-13 08:54:03 -08:00
azure Litellm dev 12 12 2024 (#7203) 2024-12-13 08:54:03 -08:00
azure_ai (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
base_llm Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
bedrock Litellm dev 12 11 2024 v2 (#7215) 2024-12-13 12:49:57 -08:00
cerebras Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
clarifai (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
cloudflare/chat Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
codestral/completion (Refactor) Code Quality improvement - rename text_completion_codestral.py -> codestral/completion/ (#7172) 2024-12-11 00:55:47 -08:00
cohere (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
custom_httpx fix(acompletion): support fallbacks on acompletion (#7184) 2024-12-11 19:20:54 -08:00
databricks (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
deepinfra/chat Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
deepseek/chat (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
deprecated_providers Code Quality Improvement - move aleph_alpha to deprecated_providers (#7168) 2024-12-11 00:50:40 -08:00
empower/chat LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config (#7148) 2024-12-10 17:12:42 -08:00
fireworks_ai rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
friendliai/chat refactor(fireworks_ai/): inherit from openai like base config (#7146) 2024-12-10 16:15:19 -08:00
galadriel/chat LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config (#7148) 2024-12-10 17:12:42 -08:00
github/chat LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config (#7148) 2024-12-10 17:12:42 -08:00
groq rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
hosted_vllm/chat rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
huggingface fix hf failing streaming test 2024-12-12 10:48:00 -08:00
jina_ai LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) 2024-12-05 00:02:31 -08:00
lm_studio rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
mistral (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
nlp_cloud (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
nvidia_nim Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
ollama (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
oobabooga (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
openai (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
openai_like Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
openrouter/chat Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
perplexity/chat rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
petals build: Squashed commit of https://github.com/BerriAI/litellm/pull/7171 2024-12-11 01:10:12 -08:00
predibase fix - handle merge conflicts 2024-12-11 01:06:40 -08:00
replicate Litellm dev 12 12 2024 (#7203) 2024-12-13 08:54:03 -08:00
sagemaker (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
sambanova Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
together_ai rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
triton fix merge conflicts 2024-12-11 01:08:43 -08:00
vertex_ai fix test_vertexai_model_garden_model_completion 2024-12-11 12:07:32 -08:00
vllm/completion (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
watsonx Litellm dev 12 13 2024 p1 (#7219) 2024-12-13 19:01:28 -08:00
xai/chat rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
__init__.py add linting 2023-08-18 11:05:05 -07:00
base.py LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) 2024-09-14 10:02:55 -07:00
baseten.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
custom_llm.py (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
maritalk.py Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
ollama_chat.py Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
README.md LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) 2024-09-14 10:02:55 -07:00
volcengine.py Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00

File Structure

August 27th, 2024

To make it easy to see how calls are transformed for each model/provider:

we are working on moving all supported litellm providers to a folder structure, where the folder name is the supported litellm provider name.

Each folder will contain a *_transformation.py file, which has all the request/response transformation logic, making it easy to see how calls are modified.

E.g. cohere/, bedrock/.
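
To illustrate the idea, here is a minimal sketch of what a provider's *_transformation.py might contain. The class and method names (ExampleProviderConfig, transform_request, transform_response) and the request/response shapes are hypothetical, chosen only to show the pattern, and do not reflect litellm's actual internal interfaces.

```python
from typing import Any, Dict, List

# Hypothetical sketch of a per-provider transformation module (e.g. cohere/
# or bedrock/). Names and payload shapes are illustrative only.


class ExampleProviderConfig:
    """Maps OpenAI-style requests/responses to a provider's wire format."""

    def transform_request(
        self,
        model: str,
        messages: List[Dict[str, str]],
        optional_params: Dict[str, Any],
    ) -> Dict[str, Any]:
        # Flatten chat messages into whatever shape the provider expects.
        return {
            "model": model,
            "prompt": "\n".join(m["content"] for m in messages),
            **optional_params,
        }

    def transform_response(self, raw_response: Dict[str, Any]) -> Dict[str, Any]:
        # Re-shape the provider response into an OpenAI-style chat completion.
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": raw_response.get("generated_text", ""),
                    }
                }
            ],
            "usage": raw_response.get("usage", {}),
        }
```

Keeping both directions of the mapping in a single per-provider file is what makes it easy to audit how a call is modified before it reaches the provider and how the provider's response is mapped back to the OpenAI format.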