litellm-mirror/litellm/llms
Krish Dholakia 89fcd7b0e1
Litellm 12 02 2024 (#6994)
* add the logprobs param for fireworks ai (#6915)

* add the logprobs param for fireworks ai
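
A minimal usage sketch of what this enables (the model name is illustrative, not taken from the PR):

```python
# Hedged sketch: passing `logprobs` through to a Fireworks AI model.
import litellm

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p1-8b-instruct",
    messages=[{"role": "user", "content": "Say hi"}],
    logprobs=True,  # now forwarded to Fireworks AI instead of being dropped
)
```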

* (feat) pass through llm endpoints - add `PATCH` support (vertex context caching requires for update ops)  (#6924)

* add PATCH for pass through endpoints

* test_pass_through_routes_support_all_methods
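
A rough sketch of PATCH support on a pass-through route (the proxy is FastAPI-based; the route path and handler below are illustrative, not litellm's actual code):

```python
from fastapi import FastAPI, Request

app = FastAPI()

# Register the pass-through route for all verbs, including PATCH,
# which Vertex AI context-caching update operations require.
@app.api_route(
    "/vertex-ai/{endpoint:path}",
    methods=["GET", "POST", "PUT", "DELETE", "PATCH"],
)
async def pass_through(endpoint: str, request: Request):
    body = await request.body()
    # ... forward request.method, headers, and body to the upstream provider
```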

* sonnet supports pdf, haiku does not (#6928)

* (feat) DataDog Logger - Add Failure logging + use Standard Logging payload (#6929)

* add async_log_failure_event for dd

* use standard logging payload for DD logging

* use standard logging payload for DD

* fix use SLP status

* allow opting into _create_v0_logging_payload

* add unit tests for DD logging payload

* fix dd logging tests

* (feat) log proxy auth errors on datadog  (#6931)

* add new dd type for auth errors

* add async_log_proxy_authentication_errors

* fix comment

* use async_log_proxy_authentication_errors

* test_datadog_post_call_failure_hook

* test_async_log_proxy_authentication_errors

* (feat) Allow using include to include external YAML files in a config.yaml (#6922)

* add helper to process includes directive on yaml

* add doc on config management

* unit tests for `include` on config.yaml
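
A hedged sketch of how an `include` pre-processor for config.yaml can work (the helper name and shallow-merge behavior are assumptions for illustration):

```python
import os
import yaml

def process_includes(config_path: str) -> dict:
    """Load a config.yaml and splice in any files listed under `include`."""
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}
    base_dir = os.path.dirname(os.path.abspath(config_path))
    for relative_path in config.pop("include", []):
        with open(os.path.join(base_dir, relative_path)) as f:
            included = yaml.safe_load(f) or {}
        config.update(included)  # shallow merge: included keys win
    return config
```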

* bump: version 1.52.16 → 1.53.0

* (feat) dd logger - set tags according to the values set by those env vars  (#6933)

* dd logger, inherit from .envs

* test_datadog_payload_environment_variables

* fix _get_datadog_service
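
The DD_* names below are Datadog's documented unified-service-tagging env vars; the helper itself is an illustrative sketch, not litellm's implementation:

```python
import os

def _get_datadog_tags() -> str:
    # Datadog's standard unified service tagging env vars.
    env = os.getenv("DD_ENV", "unknown")
    service = os.getenv("DD_SERVICE", "litellm")
    version = os.getenv("DD_VERSION", "unknown")
    return f"env:{env},service:{service},version:{version}"
```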

* build(ui/): update ui build

* bump: version 1.53.0 → 1.53.1

* Revert "(feat) Allow using include to include external YAML files in a config.yaml (#6922)"

This reverts commit 68e59824a3.

* LiteLLM Minor Fixes & Improvements (11/26/2024)  (#6913)

* docs(config_settings.md): document all router_settings

* ci(config.yml): add router_settings doc test to ci/cd

* test: debug test on ci/cd

* test: debug ci/cd test

* test: fix test

* fix(team_endpoints.py): skip invalid team object. don't fail `/team/list` call

Otherwise the UI fails to load the team list, causing downstream errors

* test(base_llm_unit_tests.py): add 'response_format={"type": "text"}' test to base_llm_unit_tests

adds complete coverage for all 'response_format' values to ci/cd

* feat(router.py): support wildcard routes in `get_router_model_info()`

Addresses https://github.com/BerriAI/litellm/issues/6914

* build(model_prices_and_context_window.json): add tpm/rpm limits for all gemini models

Allows for ratelimit tracking for gemini models even with wildcard routing enabled

Addresses https://github.com/BerriAI/litellm/issues/6914

* feat(router.py): add tpm/rpm tracking on success/failure to global_router

Addresses https://github.com/BerriAI/litellm/issues/6914

* feat(router.py): support wildcard routes on router.get_model_group_usage()
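
A sketch of the wildcard-route shape this tracking applies to, assuming the documented Router config:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gemini/*",  # wildcard route: matches any gemini model
            "litellm_params": {"model": "gemini/*"},
        }
    ]
)
# With tpm/rpm tracking on success/failure, usage for a concrete model such as
# "gemini/gemini-1.5-flash" is attributable even though it matched via wildcard.
```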

* fix(router.py): fix linting error

* fix(router.py): implement get_remaining_tokens_and_requests

Addresses https://github.com/BerriAI/litellm/issues/6914

* fix(router.py): fix linting errors

* test: fix test

* test: fix tests

* docs(config_settings.md): add missing dd env vars to docs

* fix(router.py): check if hidden params is dict

* LiteLLM Minor Fixes & Improvements (11/27/2024) (#6943)

* fix(http_parsing_utils.py): remove `ast.literal_eval()` from http utils

Security fix - https://huntr.com/bounties/96a32812-213c-4819-ba4e-36143d35e95b?token=bf414bbd77f8b346556e64ab2dd9301ea44339910877ea50401c76f977e36cdd78272f5fb4ca852a88a7e832828aae1192df98680544ee24aa98f3cf6980d8bab641a66b7ccbc02c0e7d4ddba2db4dbe7318889dc0098d8db2d639f345f574159814627bb084563bad472e2f990f825bff0878a9e281e72c88b4bc5884d637d186c0d67c9987c57c3f0caf395aff07b89ad2b7220d1dd7d1b427fd2260b5f01090efce5250f8b56ea2c0ec19916c24b23825d85ce119911275944c840a1340d69e23ca6a462da610
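
Why the removal matters, as a sketch: `ast.literal_eval()` accepts attacker-shaped Python literals (e.g. pathologically nested expressions) that can exhaust the parser, while `json.loads()` enforces the much stricter JSON grammar:

```python
import json

def parse_request_body(raw: bytes) -> dict:
    # Safe: only valid JSON is accepted; no Python literal parsing.
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    return data
```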

* fix(converse/transformation.py): support bedrock apac cross region inference

Fixes https://github.com/BerriAI/litellm/issues/6905
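
For context, Bedrock cross-region inference profiles prefix the model id with a region group ("us.", "eu.", and now "apac."); a minimal sketch of handling that prefix:

```python
SUPPORTED_REGION_PREFIXES = ("us", "eu", "apac")

def strip_cross_region_prefix(model_id: str) -> str:
    # "apac.anthropic.claude-3-5-sonnet-..." -> "anthropic.claude-3-5-sonnet-..."
    prefix, _, rest = model_id.partition(".")
    return rest if prefix in SUPPORTED_REGION_PREFIXES else model_id
```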

* fix(user_api_key_auth.py): add auth check for websocket endpoint

Fixes https://github.com/BerriAI/litellm/issues/6926
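
A hedged sketch of the pattern (FastAPI websocket auth; `verify_key` is a hypothetical stand-in for the proxy's real key check):

```python
from fastapi import WebSocket, status

async def verify_key(api_key: str) -> bool:
    # Hypothetical stand-in for the proxy's real key verification.
    return api_key.startswith("Bearer sk-")

async def websocket_endpoint(websocket: WebSocket):
    api_key = websocket.headers.get("authorization")
    if api_key is None or not await verify_key(api_key):
        # Reject the connection before any model traffic flows.
        await websocket.close(code=status.WS_1008_POLICY_VIOLATION)
        return
    await websocket.accept()
```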

* fix(user_api_key_auth.py): use `model` from query param

* fix: fix linting error

* test: run flaky tests first

* docs: update the docs (#6923)

* (bug fix) /key/update was not storing `budget_duration` in the DB  (#6941)

* fix - store budget_duration for keys

* test_generate_and_update_key

* test_update_user_unit_test

* fix user update

* (fix) handle json decode errors for DD exception logging (#6934)

* fix JSONDecodeError

* handle async_log_proxy_authentication_errors

* fix test_async_log_proxy_authentication_errors_get_request

* Revert "Revert "(feat) Allow using include to include external YAML files in a config.yaml (#6922)""

This reverts commit 5d13302e6b.

* (docs + fix) Add docs on Moderations endpoint, Text Completion  (#6947)

* fix _pass_through_moderation_endpoint_factory

* fix route_llm_request

* doc moderations api

* docs on /moderations

* add e2e tests for moderations api

* docs moderations api

* test_pass_through_moderation_endpoint_factory

* docs text completion
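
A usage sketch against the proxy's OpenAI-compatible /moderations route (base URL and key are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
resp = client.moderations.create(input="...text to screen...")
print(resp.results[0].flagged)
```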

* (feat) add enforcement for unique key aliases on /key/update and /key/generate  (#6944)

* add enforcement for unique key aliases

* fix _enforce_unique_key_alias

* fix _enforce_unique_key_alias

* fix _enforce_unique_key_alias

* test_enforce_unique_key_alias
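
A hypothetical sketch of the uniqueness check (the helper name is from this commit; the DB-access shape is an assumption):

```python
async def _enforce_unique_key_alias(key_alias, prisma_client, existing_key_token=None):
    if key_alias is None:
        return
    existing = await prisma_client.db.litellm_verificationtoken.find_first(
        where={"key_alias": key_alias}
    )
    # Allow /key/update to keep its own alias, but block collisions.
    if existing is not None and existing.token != existing_key_token:
        raise ValueError(f"key_alias '{key_alias}' is already in use")
```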

* (fix) tag merging / aggregation logic   (#6932)

* use 1 helper to merge tags + ensure uniqueness

* test_add_litellm_data_to_request_duplicate_tags

* fix _merge_tags

* fix proxy utils test

* fix doc string
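
A sketch of the order-preserving merge (helper name from the commit; the body is illustrative):

```python
from typing import List, Optional

def _merge_tags(request_tags: Optional[List[str]], tags_to_add: Optional[List[str]]) -> List[str]:
    merged: List[str] = []
    for tag in (request_tags or []) + (tags_to_add or []):
        if tag not in merged:  # keep first occurrence, drop duplicates
            merged.append(tag)
    return merged
```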

* (feat) Allow disabling ErrorLogs written to the DB  (#6940)

* fix - allow disabling logging error logs

* docs on disabling error logs

* doc string for _PROXY_failure_handler

* test_disable_error_logs

* rename file

* fix rename file

* increase test coverage for test_enable_error_logs

* fix(key_management_endpoints.py): support 'tags' param on `/key/update` (#6945)

* LiteLLM Minor Fixes & Improvements (11/29/2024)  (#6965)

* fix(factory.py): ensure tool call converts image url

Fixes https://github.com/BerriAI/litellm/issues/6953

* fix(transformation.py): support mp4 + pdf url's for vertex ai

Fixes https://github.com/BerriAI/litellm/issues/6936

* fix(http_handler.py): mask gemini api key in error logs

Fixes https://github.com/BerriAI/litellm/issues/6963
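
For context, Gemini sends the API key as a `key` query parameter, so it can leak through logged URLs; a minimal masking sketch:

```python
import re

def mask_api_key(url: str) -> str:
    # "https://...?key=AIza..." -> "https://...?key=********"
    return re.sub(r"(key=)[^&\s]+", r"\1********", url)
```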

* docs(prometheus.md): update prometheus FAQs

* feat(auth_checks.py): ensure specific model access > wildcard model access

if a wildcard model is in the access group but the specific model is not - deny access

* fix(auth_checks.py): handle auth checks for team based model access groups

handles the scenario where a model access group is used for wildcard models

* fix(internal_user_endpoints.py): support adding guardrails on `/user/update`

Fixes https://github.com/BerriAI/litellm/issues/6942

* fix(key_management_endpoints.py): fix prepare_metadata_fields helper

* fix: fix tests

* build(requirements.txt): bump openai dep version

fixes proxies argument

* test: fix tests

* fix(http_handler.py): fix error message masking

* fix(bedrock_guardrails.py): pass in prepped data

* test: fix test

* test: fix nvidia nim test

* fix(http_handler.py): return original response headers

* fix: revert maskedhttpstatuserror

* test: update tests

* test: cleanup test

* fix(key_management_endpoints.py): fix metadata field update logic

* fix(key_management_endpoints.py): maintain initial order of guardrails in key update

* fix(key_management_endpoints.py): handle prepare metadata

* fix: fix linting errors

* fix: fix linting errors

* fix: fix linting errors

* fix: fix key management errors

* fix(key_management_endpoints.py): update metadata

* test: update test

* refactor: add more debug statements

* test: skip flaky test

* test: fix test

* fix: fix test

* fix: fix update metadata logic

* fix: fix test

* ci(config.yml): change db url for e2e ui testing

* bump: version 1.53.1 → 1.53.2

* Updated config.yml

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Sara Han <127759186+sdiazlor@users.noreply.github.com>

* fix(exceptions.py): ensure ratelimit error code == 429, type == "throttling_error"

Fixes https://github.com/BerriAI/litellm/pull/6973
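
The invariant the fix enforces, as a sketch (litellm's real exception class carries more fields):

```python
class RateLimitError(Exception):
    def __init__(self, message: str):
        super().__init__(message)
        self.status_code = 429            # HTTP status for rate limits
        self.type = "throttling_error"    # OpenAI-compatible error type
```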

* fix(utils.py): add jina ai dimensions embedding param support

Fixes https://github.com/BerriAI/litellm/issues/6591
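
A usage sketch, assuming a Matryoshka-capable Jina model (the model name is illustrative):

```python
import litellm

response = litellm.embedding(
    model="jina_ai/jina-embeddings-v3",
    input=["hello world"],
    dimensions=256,  # now forwarded as a supported param for Jina AI
)
```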

* fix(exception_mapping_utils.py): add bedrock 'prompt is too long' exception to context window exceeded error exception mapping

Fixes https://github.com/BerriAI/litellm/issues/6629

Closes https://github.com/BerriAI/litellm/pull/6975
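
A sketch of the added mapping (the exception class below stands in for litellm's ContextWindowExceededError):

```python
class ContextWindowExceededError(Exception):
    """Stand-in for litellm's exception of the same name."""

def map_bedrock_error(message: str) -> None:
    # Bedrock signals an oversized input with this message text.
    if "prompt is too long" in message.lower():
        raise ContextWindowExceededError(message)
```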

* fix(litellm_logging.py): strip trailing slash for api base

Closes https://github.com/BerriAI/litellm/pull/6859

* test: skip timeout issue

---------

Co-authored-by: ershang-dou <erlie.shang@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Sara Han <127759186+sdiazlor@users.noreply.github.com>
2024-12-02 22:00:01 -08:00
AI21 Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
anthropic (feat) Add usage tracking for streaming /anthropic passthrough routes (#6842) 2024-11-21 19:36:03 -08:00
azure_ai LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) 2024-11-23 15:17:40 +05:30
AzureOpenAI (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) 2024-11-22 18:47:26 -08:00
bedrock LiteLLM Minor Fixes & Improvements (11/27/2024) (#6943) 2024-11-28 00:32:46 +05:30
cerebras [Feat] Add max_completion_tokens param (#5691) 2024-09-14 14:57:01 -07:00
cohere Litellm dev 11 30 2024 (#6974) 2024-12-02 21:03:33 -08:00
custom_httpx LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) 2024-12-01 05:24:11 -08:00
databricks fix: trigger new build 2024-12-02 18:34:27 -08:00
deepseek/chat Litellm dev 11 08 2024 (#6658) 2024-11-08 22:07:17 +05:30
files_apis Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
fine_tuning_apis (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
fireworks_ai Litellm 12 02 2024 (#6994) 2024-12-02 22:00:01 -08:00
groq Litellm dev 11 21 2024 (#6837) 2024-11-22 01:53:52 +05:30
hosted_vllm/chat feat(proxy_cli.py): add new 'log_config' cli param (#6352) 2024-10-21 21:25:58 -07:00
huggingface_llms_metadata add hf tgi and conversational models 2023-09-27 15:56:45 -07:00
jina_ai LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) 2024-11-15 01:02:54 +05:30
lm_studio Litellm lm studio embedding params (#6746) 2024-11-19 09:54:50 +05:30
mistral (fix) OpenAI's optional messages[].name does not work with Mistral API (#6701) 2024-11-11 18:03:41 -08:00
nvidia_nim (feat) add nvidia nim embeddings (#6032) 2024-10-03 17:12:14 +05:30
OpenAI (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
openai_like (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
perplexity/chat feat(proxy_cli.py): add new 'log_config' cli param (#6352) 2024-10-21 21:25:58 -07:00
prompt_templates LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) 2024-12-01 05:24:11 -08:00
sagemaker (code quality) add ruff check PLR0915 for too-many-statements (#6309) 2024-10-18 15:36:49 +05:30
sambanova sambanova support (#5547) (#5703) 2024-09-14 17:23:04 -07:00
together_ai LiteLLM Minor Fixes & Improvements (11/13/2024) (#6729) 2024-11-15 11:18:31 +05:30
tokenizers feat(utils.py): bump tiktoken dependency to 0.7.0 2024-06-10 21:21:23 -07:00
vertex_ai_and_google_ai_studio LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) 2024-12-01 05:24:11 -08:00
watsonx (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
xai/chat (feat) add XAI ChatCompletion Support (#6373) 2024-11-01 20:37:09 +05:30
__init__.py add linting 2023-08-18 11:05:05 -07:00
aleph_alpha.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
azure_text.py (code quality) add ruff check PLR0915 for too-many-statements (#6309) 2024-10-18 15:36:49 +05:30
base.py LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) 2024-09-14 10:02:55 -07:00
base_aws_llm.py add bedrock image gen async support 2024-11-08 13:17:43 -08:00
baseten.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
clarifai.py (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
cloudflare.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
custom_llm.py LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158) 2024-10-11 23:04:36 -07:00
gemini.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
huggingface_restapi.py (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
maritalk.py Add pyright to ci/cd + Fix remaining type-checking errors (#6082) 2024-10-05 17:04:00 -04:00
nlp_cloud.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
ollama.py (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) 2024-11-22 18:47:26 -08:00
ollama_chat.py (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) 2024-11-22 18:47:26 -08:00
oobabooga.py Add pyright to ci/cd + Fix remaining type-checking errors (#6082) 2024-10-05 17:04:00 -04:00
openrouter.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
palm.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
petals.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
predibase.py (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
README.md LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) 2024-09-14 10:02:55 -07:00
replicate.py (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
text_completion_codestral.py (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
triton.py (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
vllm.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
volcengine.py [Feat] Add max_completion_tokens param (#5691) 2024-09-14 14:57:01 -07:00

File Structure

August 27th, 2024

To make it easy to see how calls are transformed for each model/provider:

we are working on moving all supported litellm providers to a folder structure, where each folder is named after the litellm provider it supports.

Each folder will contain a *_transformation.py file, which has all the request/response transformation logic, making it easy to see how calls are modified.

E.g. cohere/, bedrock/.
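
An illustrative shape for such a *_transformation.py (function names here are assumptions, not litellm's actual interface):

```python
def transform_request(model: str, messages: list, optional_params: dict) -> dict:
    """Map an OpenAI-style request onto the provider's wire format."""
    ...

def transform_response(raw_response: dict) -> dict:
    """Map the provider's response back to the OpenAI-style shape."""
    ...
```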