litellm-mirror/litellm/llms
Latest commit: 859b47f08b by Krish Dholakia, 2024-12-01 05:24:11 -08:00
LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965)

* fix(factory.py): ensure tool call converts image url (Fixes https://github.com/BerriAI/litellm/issues/6953)
* fix(transformation.py): support mp4 + PDF URLs for Vertex AI (Fixes https://github.com/BerriAI/litellm/issues/6936)
* fix(http_handler.py): mask Gemini API key in error logs (Fixes https://github.com/BerriAI/litellm/issues/6963)
* docs(prometheus.md): update Prometheus FAQs
* feat(auth_checks.py): give specific model access precedence over wildcard model access; if a wildcard model is in an access group but the specific model is not, deny access
* fix(auth_checks.py): handle auth checks for team-based model access groups; covers the scenario where a model access group is used for wildcard models
* fix(internal_user_endpoints.py): support adding guardrails on `/user/update` (Fixes https://github.com/BerriAI/litellm/issues/6942)
* fix(key_management_endpoints.py): fix `prepare_metadata_fields` helper
* build(requirements.txt): bump openai dependency version; fixes the `proxies` argument
* fix(http_handler.py): fix error message masking
* fix(bedrock_guardrails.py): pass in prepped data
* fix(http_handler.py): return original response headers
* fix: revert maskedhttpstatuserror
* fix(key_management_endpoints.py): fix metadata field update logic
* fix(key_management_endpoints.py): maintain initial order of guardrails in key update
* fix(key_management_endpoints.py): handle prepare metadata
* fix: fix linting errors
* fix: fix key management errors
* fix(key_management_endpoints.py): update metadata
* fix: fix update metadata logic
* refactor: add more debug statements
* test: fix, update, and clean up tests; fix nvidia nim test; skip a flaky test
* ci(config.yml): change db url for e2e ui testing
| Path | Last commit | Date |
| --- | --- | --- |
| AI21 | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| anthropic | (feat) Add usage tracking for streaming /anthropic passthrough routes (#6842) | 2024-11-21 19:36:03 -08:00 |
| azure_ai | LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) | 2024-11-23 15:17:40 +05:30 |
| AzureOpenAI | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| bedrock | LiteLLM Minor Fixes & Improvements (11/27/2024) (#6943) | 2024-11-28 00:32:46 +05:30 |
| cerebras | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |
| cohere | LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) | 2024-11-23 15:17:40 +05:30 |
| custom_httpx | LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) | 2024-12-01 05:24:11 -08:00 |
| databricks | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| deepseek/chat | Litellm dev 11 08 2024 (#6658) | 2024-11-08 22:07:17 +05:30 |
| files_apis | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| fine_tuning_apis | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| fireworks_ai | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| groq | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| hosted_vllm/chat | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| huggingface_llms_metadata | add hf tgi and conversational models | 2023-09-27 15:56:45 -07:00 |
| jina_ai | LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) | 2024-11-15 01:02:54 +05:30 |
| lm_studio | Litellm lm studio embedding params (#6746) | 2024-11-19 09:54:50 +05:30 |
| mistral | (fix) OpenAI's optional messages[].name does not work with Mistral API (#6701) | 2024-11-11 18:03:41 -08:00 |
| nvidia_nim | (feat) add nvidia nim embeddings (#6032) | 2024-10-03 17:12:14 +05:30 |
| OpenAI | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| openai_like | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| perplexity/chat | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| prompt_templates | LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) | 2024-12-01 05:24:11 -08:00 |
| sagemaker | (code quality) add ruff check PLR0915 for too-many-statements (#6309) | 2024-10-18 15:36:49 +05:30 |
| sambanova | sambanova support (#5547) (#5703) | 2024-09-14 17:23:04 -07:00 |
| together_ai | LiteLLM Minor Fixes & Improvements (11/13/2024) (#6729) | 2024-11-15 11:18:31 +05:30 |
| tokenizers | feat(utils.py): bump tiktoken dependency to 0.7.0 | 2024-06-10 21:21:23 -07:00 |
| vertex_ai_and_google_ai_studio | LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) | 2024-12-01 05:24:11 -08:00 |
| watsonx | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| xai/chat | (feat) add XAI ChatCompletion Support (#6373) | 2024-11-01 20:37:09 +05:30 |
| __init__.py | add linting | 2023-08-18 11:05:05 -07:00 |
| aleph_alpha.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| azure_text.py | (code quality) add ruff check PLR0915 for too-many-statements (#6309) | 2024-10-18 15:36:49 +05:30 |
| base.py | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| base_aws_llm.py | add bedrock image gen async support | 2024-11-08 13:17:43 -08:00 |
| baseten.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| clarifai.py | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| cloudflare.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| custom_llm.py | LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158) | 2024-10-11 23:04:36 -07:00 |
| gemini.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| huggingface_restapi.py | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| maritalk.py | Add pyright to ci/cd + Fix remaining type-checking errors (#6082) | 2024-10-05 17:04:00 -04:00 |
| nlp_cloud.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| ollama.py | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| ollama_chat.py | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| oobabooga.py | Add pyright to ci/cd + Fix remaining type-checking errors (#6082) | 2024-10-05 17:04:00 -04:00 |
| openrouter.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| palm.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| petals.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| predibase.py | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| README.md | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| replicate.py | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| text_completion_codestral.py | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| triton.py | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| vllm.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| volcengine.py | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |

File Structure

August 27th, 2024

To make it easy to see how calls are transformed for each model/provider, we are moving all supported LiteLLM providers into a folder structure, where each folder is named after the provider it supports.

Each folder will contain a `*_transformation.py` file holding all of the request/response transformation logic, making it easy to see how calls are modified.

E.g. `cohere/`, `bedrock/`.
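As a sketch of what a provider's `*_transformation.py` could look like, here is a minimal, hypothetical example; the function names and the provider payload fields (`prompt`, `max_output_tokens`, `output_text`) are illustrative assumptions, not LiteLLM's actual API.

```python
# Hypothetical provider transformation module (illustrative only).
# Maps OpenAI-style chat-completion inputs/outputs to and from a
# made-up provider payload shape.


def transform_request(model: str, messages: list, optional_params: dict) -> dict:
    """Translate OpenAI-style chat params into the provider's request body."""
    return {
        "model": model,
        # Flatten OpenAI-style messages into a single prompt string.
        "prompt": "\n".join(m["content"] for m in messages),
        # Rename a common OpenAI param to the provider's spelling.
        "max_output_tokens": optional_params.get("max_tokens", 256),
    }


def transform_response(raw: dict) -> dict:
    """Translate the provider's response back to the OpenAI chat shape."""
    return {
        "choices": [
            {"message": {"role": "assistant", "content": raw["output_text"]}}
        ],
        "usage": raw.get("usage", {}),
    }
```

Keeping this logic in one file per provider means a reader can see exactly how a user's request is rewritten for a given provider without tracing through shared handler code.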