# litellm/litellm/llms
Latest commit: Litellm Minor Fixes & Improvements (10/03/2024) (#6049) · Krish Dholakia (`5c33d1c9af`) · 2024-10-03 18:02:28 -04:00

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `AI21` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `anthropic` | LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023) | 2024-10-02 22:00:28 -04:00 |
| `azure_ai` | LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023) | 2024-10-02 22:00:28 -04:00 |
| `AzureOpenAI` | Litellm Minor Fixes & Improvements (10/03/2024) (#6049) | 2024-10-03 18:02:28 -04:00 |
| `bedrock` | LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023) | 2024-10-02 22:00:28 -04:00 |
| `cerebras` | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |
| `cohere` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `custom_httpx` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `databricks` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `files_apis` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `fine_tuning_apis` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `fireworks_ai` | LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842) (#5858) | 2024-09-24 15:01:31 -07:00 |
| `groq` | LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) | 2024-09-27 22:52:57 -07:00 |
| `huggingface_llms_metadata` | add hf tgi and conversational models | 2023-09-27 15:56:45 -07:00 |
| `mistral` | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| `nvidia_nim` | (feat) add nvidia nim embeddings (#6032) | 2024-10-03 17:12:14 +05:30 |
| `OpenAI` | OpenAI /v1/realtime api support (#6047) | 2024-10-03 17:11:22 -04:00 |
| `prompt_templates` | fix(factory.py): bedrock: merge consecutive tool + user messages (#6028) | 2024-10-03 09:16:25 -04:00 |
| `sagemaker` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `sambanova` | sambanova support (#5547) (#5703) | 2024-09-14 17:23:04 -07:00 |
| `together_ai` | LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023) | 2024-10-02 22:00:28 -04:00 |
| `tokenizers` | feat(utils.py): bump tiktoken dependency to 0.7.0 | 2024-06-10 21:21:23 -07:00 |
| `vertex_ai_and_google_ai_studio` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `__init__.py` | add linting | 2023-08-18 11:05:05 -07:00 |
| `aleph_alpha.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `azure_text.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `base.py` | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| `base_aws_llm.py` | Litellm stable dev (#5711) | 2024-09-14 23:22:59 -07:00 |
| `baseten.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `clarifai.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `cloudflare.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `custom_llm.py` | fix(custom_llm.py): pass input params to custom llm | 2024-07-25 19:03:52 -07:00 |
| `gemini.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `huggingface_restapi.py` | (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) | 2024-10-03 23:31:10 +05:30 |
| `maritalk.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `nlp_cloud.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `ollama.py` | (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) | 2024-10-03 23:31:10 +05:30 |
| `ollama_chat.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `oobabooga.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `openrouter.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `palm.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `petals.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `predibase.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `README.md` | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| `replicate.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `text_completion_codestral.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `triton.py` | Fix not sended json_data_for_triton | 2024-08-14 09:57:48 +07:00 |
| `vllm.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `volcengine.py` | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |
| `watsonx.py` | fix: move to using pydantic obj for setting values | 2024-07-11 13:18:36 -07:00 |

## File Structure

### August 27th, 2024

To make it easy to see how calls are transformed for each model/provider, we are working on moving all supported litellm providers to a folder structure, where the folder name is the supported litellm provider name.

Each folder will contain a `*_transformation.py` file, which holds all the request/response transformation logic, making it easy to see how calls are modified.

E.g. `cohere/`, `bedrock/`.
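
As a rough illustration of the pattern, a provider's `*_transformation.py` might pair the two directions of the mapping like this. This is a minimal sketch, not litellm's actual interface: the class name, method signatures, and provider field names below are assumptions made up for this example.

```python
# Hypothetical sketch of a provider *_transformation.py.
# Class/method/field names are illustrative assumptions, not litellm's real API.
from typing import Any, Dict, List


class ExampleProviderConfig:
    """Maps OpenAI-style chat params to a made-up provider's wire format."""

    def transform_request(
        self,
        model: str,
        messages: List[Dict[str, str]],
        optional_params: Dict[str, Any],
    ) -> Dict[str, Any]:
        # Flatten chat messages into the provider's prompt field and
        # rename OpenAI-style params to the provider's expected keys.
        request: Dict[str, Any] = {
            "model": model,
            "prompt": "\n".join(m.get("content", "") for m in messages),
        }
        if "max_tokens" in optional_params:
            request["max_output_tokens"] = optional_params["max_tokens"]
        if "temperature" in optional_params:
            request["sampling_temperature"] = optional_params["temperature"]
        return request

    def transform_response(self, raw_response: Dict[str, Any]) -> Dict[str, Any]:
        # Normalize the provider's response back into an OpenAI-style shape.
        return {
            "choices": [
                {
                    "index": 0,
                    "message": {
                        "role": "assistant",
                        "content": raw_response.get("output", ""),
                    },
                    "finish_reason": raw_response.get("stop_reason", "stop"),
                }
            ],
            "usage": raw_response.get("usage", {}),
        }
```

Keeping both directions of the mapping in one file per provider is what makes it easy to trace exactly how a call is modified on its way to and from a given backend.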