Commit graph

18754 commits

Author SHA1 Message Date
Krrish Dholakia
df9f52a933 bump: version 1.55.8 → 1.55.9 2024-12-20 21:25:50 -08:00
Krish Dholakia
4f3ddebf81 Litellm dev 2024 12 20 p1 (#7335)
* fix(utils.py): e2e azure tts cost tracking working

moves tts response obj to include hidden params (allows for litellm call id, etc. to be sent in response headers); fixes spend_tracking_utils logging payload to account for the non-base-model use case

Fixes https://github.com/BerriAI/litellm/issues/7223

* fix: fix linting errors

* build(model_prices_and_context_window.json): add bedrock llama 3.3

Closes https://github.com/BerriAI/litellm/issues/7329

* fix(openai.py): fix return type for sync openai httpx response

* test: update test

* fix(spend_tracking_utils.py): fix if check

* fix(spend_tracking_utils.py): fix if check

* test: improve debugging for test

* fix: fix import
2024-12-20 21:22:31 -08:00
Krish Dholakia
61b4c41c3c Litellm dev 12 20 2024 p3 (#7339)
* fix(proxy_track_cost_callback.py): log to db if only end user param given

* fix: allows for jwt-auth based end user id spend tracking to work

* fix(utils.py): fix 'get_end_user_id_for_cost_tracking' to use 'user_api_key_end_user_id'

more stable - works with jwt-auth based end user tracking as well

* test(test_jwt.py): add e2e unit test to confirm end user cost tracking works for spend logs

* test: update test to use end_user api key hash param

* fix(langfuse.py): support end user cost tracking via jwt auth + langfuse

logs end user to langfuse if decoded from jwt token

* fix: fix linting errors

* test: fix test

* test: fix test

* fix: fix end user id extraction

* fix: run test earlier
2024-12-20 21:13:32 -08:00
Ishaan Jaff
1b2ed0c344 [Bug fix ]: Triton /infer handler incompatible with batch responses (#7337)
* migrate triton to base llm http handler

* clean up triton handler.py

* use transform functions for triton

* add TritonConfig

* get openai params for triton

* use triton embedding config

* test_completion_triton_generate_api

* test_completion_triton_infer_api

* fix TritonConfig doc string

* use TritonResponseIterator

* fix triton embeddings

* docs triton chat usage
2024-12-20 20:59:40 -08:00
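
A minimal usage sketch for the migrated Triton chat path in 1b2ed0c344 above. The model id and api_base (pointing at a Triton generate endpoint) are placeholder assumptions drawn from the commit text, not confirmed values.

```python
import litellm

# Hedged sketch: Triton completion via LiteLLM after the base-llm-http-handler migration.
# Model id and api_base are assumed placeholders for a local Triton server.
response = litellm.completion(
    model="triton/llama-3-8b",                                        # assumed model name
    api_base="http://localhost:8000/v2/models/llama-3-8b/generate",  # assumed endpoint
    messages=[{"role": "user", "content": "Hello from LiteLLM"}],
)
print(response.choices[0].message.content)
```
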
Krish Dholakia
e6bdec4eed Control fallback prompts client-side (#7334)
* feat(router.py): support passing model-specific messages in fallbacks

* docs(routing.md): separate router timeouts into separate doc

allow for 1 fallbacks doc (across proxy/router)

* docs(routing.md): cleanup router docs

* docs(reliability.md): cleanup docs

* docs(reliability.md): cleaned up fallback doc

just have 1 doc across sdk/proxy

simplifies docs

* docs(reliability.md): add setting model-specific fallback prompts

* fix: fix linting errors

* test: skip test causing openai rate limit errors

* test: fix test

* test: run vertex test first to catch error
2024-12-20 19:09:53 -08:00
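
Commit e6bdec4eed above adds support for model-specific messages in fallbacks; a minimal client-side sketch follows. The {"model": ..., "messages": ...} entry shape and the fallback model name are assumptions based on the commit text, not a confirmed API contract.

```python
import litellm

# Hedged sketch: client-side fallbacks with a model-specific prompt.
# The dict-per-fallback shape is assumed from the commit text.
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Primary prompt, tuned for GPT-4o"}],
    fallbacks=[
        {
            "model": "claude-3-5-sonnet-20240620",  # assumed fallback deployment
            "messages": [{"role": "user", "content": "Shorter prompt, tuned for Claude"}],
        }
    ],
)
print(response.choices[0].message.content)
```
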
Krrish Dholakia
495b009a22 fix pyproject to v1.55.8 2024-12-20 15:38:26 -08:00
jravi-fireworks
9d84e541b3 Fix LiteLLM documentation (#7333)
Co-authored-by: Jetashree Ravi <jetashreeravi@Jetashrees-MBP.attlocal.net>
2024-12-20 15:04:23 -08:00
Krish Dholakia
b026230b0a Litellm dev 2024 12 19 p3 (#7322)
* fix(utils.py): remove unsupported optional params (if drop_params=True) before passing into map openai params

Fixes https://github.com/BerriAI/litellm/issues/7242

* test: new test for langfuse prompt management hook

Addresses https://github.com/BerriAI/litellm/issues/3893#issuecomment-2549080296

* feat(main.py): add 'get_chat_completion_prompt' customlogger hook

allows for langfuse prompt management

Addresses https://github.com/BerriAI/litellm/issues/3893#issuecomment-2549080296

* feat(langfuse_prompt_management.py): working e2e langfuse prompt management

works with `langfuse/` route

* feat(main.py): initial tracing for dynamic langfuse params

allows admin to specify langfuse keys by model in model_list

* feat(main.py): support passing langfuse credentials dynamically

* fix(langfuse_prompt_management.py): create langfuse client based on dynamic callback params

allows dynamic langfuse params to work

* fix: fix linting errors

* docs(prompt_management.md): refactor docs for sdk + proxy prompt management tutorial

* docs(prompt_management.md): cleanup doc

* docs: cleanup topnav

* docs(prompt_management.md): update docs to be easier to use

* fix: remove unused imports

* docs(prompt_management.md): add architectural overview doc

* fix(litellm_logging.py): fix dynamic param passing

* fix(langfuse_prompt_management.py): fix linting errors

* fix: fix linting errors

* fix: use typing_extensions for typealias to ensure python3.8 compatibility

* test: use stream_options in test to account for tiktoken diff

* fix: improve import error message, and check run test earlier
2024-12-20 13:30:16 -08:00
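
Commit b026230b0a above wires Langfuse prompt management into the `langfuse/` route with dynamically passed credentials; here is a minimal sketch. The `prompt_id`, `prompt_variables`, and `langfuse_*` credential kwargs are assumptions taken from the commit text and may differ in the released API.

```python
import litellm

# Hedged sketch: Langfuse prompt management via the `langfuse/` model route.
# prompt_id / prompt_variables / langfuse_* kwargs are assumed from the commit text.
response = litellm.completion(
    model="langfuse/gpt-3.5-turbo",
    prompt_id="my-managed-prompt",            # prompt stored in Langfuse (placeholder id)
    prompt_variables={"topic": "reranking"},  # substituted into the managed template
    messages=[{"role": "user", "content": "user turn merged with the managed prompt (behavior assumed)"}],
    langfuse_public_key="pk-lf-...",          # dynamic credentials per the commit
    langfuse_secret_key="sk-lf-...",
)
print(response.choices[0].message.content)
```
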
Ishaan Jaff
205e2dbe3c fix linting errors 2024-12-20 12:41:23 -08:00
Krrish Dholakia
72172ea15c docs: refactor admin ui docs 2024-12-20 10:20:07 -08:00
Krrish Dholakia
147fc641cd bump: version 1.55.7 → 1.55.8 2024-12-19 22:37:36 -08:00
Krish Dholakia
e8d2cb4935 Litellm dev 12 19 2024 p2 (#7315)
* fix(proxy_server.py): only update k,v pair if v is not empty/null

Fixes https://github.com/BerriAI/litellm/issues/6787

* test(test_router.py): cleanup duplicate calls

* test: add new test stream options drop params test

* test: update optional params / stream options test to test for vertex ai mistral route specifically

Addresses https://github.com/BerriAI/litellm/issues/7309

* fix(proxy_server.py): fix linting errors

* fix: fix linting errors
2024-12-19 20:28:16 -08:00
Ishaan Jaff
ad7dfd2edb docs infinity rerank api docs 2024-12-19 18:51:55 -08:00
Ishaan Jaff
db4cde9d9c docs base rerank config 2024-12-19 18:33:32 -08:00
Ishaan Jaff
8f8499908c docs add ref prs 2024-12-19 18:31:38 -08:00
Ishaan Jaff
6641e75e0c (feat) add infinity rerank models (#7321)
* Support Infinity Reranker (custom reranking models) (#7247)

* Support Infinity Reranker

* Clean code

* Included transformation.py

* Clean code

* Added Infinity reranker test

* Clean code

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* transform_rerank_response

* update handler.py

* infinity rerank updates

* ci/cd run again

* add infinity unit tests

* docs add instruction on how to add a new provider for rerank

---------

Co-authored-by: Hao Shan <53949959+haoshan98@users.noreply.github.com>
2024-12-19 18:30:28 -08:00
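
Commit 6641e75e0c above adds an `infinity` rerank provider; a minimal sketch against a self-hosted Infinity server follows. The reranker id and api_base are placeholder assumptions, not values taken from the docs added in this commit.

```python
import litellm

# Hedged sketch: reranking through a self-hosted Infinity server.
# Model id and api_base are placeholder assumptions.
response = litellm.rerank(
    model="infinity/BAAI/bge-reranker-v2-m3",   # assumed reranker id
    api_base="http://localhost:7997",           # assumed Infinity server URL
    query="What is LiteLLM?",
    documents=[
        "LiteLLM is a gateway for 100+ LLM APIs.",
        "Infinity serves embedding and reranking models.",
    ],
    top_n=1,
)
print(response.results)
```
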
Krrish Dholakia
50204d9a6d build(reset_stable.yml): rename branch to 'litellm_stable_release_branch'
use this branch to trigger load test / other workflows for stable releases
2024-12-19 17:43:37 -08:00
Krrish Dholakia
6eaa3ff4e3 build(reset_stable.yml): add new workflow to reset litellm_stable to latest release 2024-12-19 17:36:58 -08:00
Ishaan Jaff
b0738fd439 (code refactor) - Add BaseRerankConfig. Use BaseRerankConfig for cohere/rerank and azure_ai/rerank (#7319)
* add base rerank config

* working sync cohere rerank

* update rerank types

* update base rerank config

* remove old rerank

* add new cohere handler.py

* add cohere rerank transform

* add get_provider_rerank_config

* add rerank to base llm http handler

* add rerank utils

* add arerank to llm http handler.py

* add AzureAIRerankConfig

* updates rerank config

* update test rerank

* fix unused imports

* update get_provider_rerank_config

* test_basic_rerank_caching

* fix unused import

* test rerank
2024-12-19 17:03:34 -08:00
Ishaan Jaff
51f3fc65f7 [Bug Fix]: ImportError: cannot import name 'T' from 're' (#7314)
* fix unused imports

* add test for python 3.12

* re introduce error - as a test

* update config for ci/cd

* fix python 3.13 install

* bump pyyaml

* bump numpy

* fix embedding requests

* bump pillow dep

* bump version

* bump pydantic

* bump tiktoken

* fix import

* fix python 3.13 import

* fix unused imports in tests/*
2024-12-19 13:09:30 -08:00
Ishaan Jaff
62a1cdec47 (code quality) run ruff rule to ban unused imports (#7313)
* remove unused imports

* fix AmazonConverseConfig

* fix test

* fix import

* ruff check fixes

* test fixes

* fix testing

* fix imports
2024-12-19 12:33:42 -08:00
Krrish Dholakia
7e01805caa bump: version 1.55.6 → 1.55.7 2024-12-19 11:22:44 -08:00
Krish Dholakia
3183b7e6a1 o1 - add image param handling (#7312)
* fix(openai.py): fix returning o1 non-streaming requests

fixes issue where fake stream was always true for o1

* build(model_prices_and_context_window.json): add 'supports_vision' for o1 models

* fix: add internal server error exception mapping

* fix(base_llm_unit_tests.py): drop temperature from test

* test: mark prompt caching as a flaky test
2024-12-19 11:22:25 -08:00
Ishaan Jaff
a58dd974e8 bump: version 1.55.5 → 1.55.6 2024-12-18 19:47:47 -08:00
Ishaan Jaff
acffdab966 fix use 1 file _PROXY_track_cost_callback (#7304) 2024-12-18 19:46:26 -08:00
Ishaan Jaff
087ab41d66 (proxy admin ui) - show Teams sorted by Team Alias (#7296)
* ui - sort teams by team alias

* test assert /user/info returns teams in a sorted order

* fix team_alias check on team
2024-12-18 19:43:19 -08:00
Ishaan Jaff
6220e17ebf (feat proxy) v2 - model max budgets (#7302)
* clean up unused code

* add _PROXY_VirtualKeyModelMaxBudgetLimiter

* adjust type imports

* working _PROXY_VirtualKeyModelMaxBudgetLimiter

* fix user_api_key_model_max_budget

* fix user_api_key_model_max_budget

* update naming

* update naming

* fix changes to RouterBudgetLimiting

* test_call_with_key_over_model_budget

* test_call_with_key_over_model_budget

* handle _get_request_model_budget_config

* e2e test for test_call_with_key_over_model_budget

* clean up test

* run ci/cd again

* add validate_model_max_budget

* docs fix

* update doc

* add e2e testing for _PROXY_VirtualKeyModelMaxBudgetLimiter

* test_unit_test_max_model_budget_limiter.py
2024-12-18 19:42:46 -08:00
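
Commit 6220e17ebf above introduces v2 per-model budgets on virtual keys; a hedged sketch of creating such a key against a local proxy follows. `/key/generate` is the standard key endpoint, but the nested budget fields (`budget_limit`, `time_period`) are assumptions based on the commit text.

```python
import httpx

# Hedged sketch: create a virtual key with per-model max budgets (v2).
# The nested budget schema below is assumed from the commit text.
resp = httpx.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder proxy master key
    json={
        "model_max_budget": {
            "gpt-4o": {"budget_limit": 0.01, "time_period": "1d"},
            "claude-3-5-sonnet": {"budget_limit": 0.05, "time_period": "7d"},
        }
    },
)
print(resp.json())
```
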
Krish Dholakia
1a4910f6c0 fix(health.md): add rerank model health check information (#7295)
* fix(health.md): add rerank model health check information

* build(model_prices_and_context_window.json): add gemini 2.0 for google ai studio - pricing + commercial rate limits

* build(model_prices_and_context_window.json): add gemini-2.0 supports audio output = true

* docs(team_model_add.md): clarify allowing teams to add models is an enterprise feature

* fix(o1_transformation.py): add support for 'n', 'response_format' and 'stop' params for o1 and 'stream_options' param for o1-mini

* build(model_prices_and_context_window.json): add 'supports_system_message' to supporting openai models

needed as o1-preview and o1-mini models don't support 'system message'

* fix(o1_transformation.py): translate system message based on if o1 model supports it

* fix(o1_transformation.py): return 'stream' param support if o1-mini/o1-preview

o1 currently doesn't support streaming, but the other model versions do

Fixes https://github.com/BerriAI/litellm/issues/7292

* fix(o1_transformation.py): return tool calling/response_format in supported params if model map says so

Fixes https://github.com/BerriAI/litellm/issues/7292

* fix: fix linting errors

* fix: update '_transform_messages'

* fix(o1_transformation.py): fix provider passed for supported param checks

* test(base_llm_unit_tests.py): skip test if api takes >5s to respond

* fix(utils.py): return false in 'supports_factory' if can't find value

* fix(o1_transformation.py): always return stream + stream_options as supported params + handle stream options being passed in for azure o1

* feat(openai.py): support stream faking natively in openai handler

Allows o1 calls to be faked for just the "o1" model, while allowing native streaming for o1-mini and o1-preview

Fixes https://github.com/BerriAI/litellm/issues/7292

* fix(openai.py): use inference param instead of original optional param
2024-12-18 19:18:10 -08:00
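
Commit 1a4910f6c0 above changes how o1-family requests are transformed (system-message translation, stream faking for "o1", native streaming for o1-mini/o1-preview); a hedged call sketch follows. The model name and the behavior noted in the comments are taken from the commit text, not verified against the release.

```python
import litellm

# Hedged sketch: o1-family call relying on the transformations described above.
# The system message is translated by LiteLLM if the model doesn't accept one (per the commit).
response = litellm.completion(
    model="o1-mini",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize this change in one line."},
    ],
    stream=True,  # o1-mini/o1-preview stream natively; "o1" is fake-streamed per the commit
)
for chunk in response:
    print(chunk)
```
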
Krish Dholakia
e95820367f fix(hosted_vllm/transformation.py): return fake api key, if none give… (#7301)
* fix(hosted_vllm/transformation.py): return fake api key, if none given. Prevents httpx error

Fixes https://github.com/BerriAI/litellm/issues/7291

* test: fix test

* fix(main.py): add hosted_vllm/ support for embeddings endpoint

Closes https://github.com/BerriAI/litellm/issues/7290

* docs(vllm.md): add docs on vllm embeddings usage

* fix(__init__.py): fix sambanova model test

* fix(base_llm_unit_tests.py): skip pydantic obj test if model takes >5s to respond
2024-12-18 18:41:53 -08:00
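
Commit e95820367f above adds `hosted_vllm/` support on the embeddings endpoint; a minimal sketch follows. The model name and api_base are placeholders for a self-hosted vLLM server.

```python
import litellm

# Hedged sketch: embeddings against a self-hosted vLLM OpenAI-compatible server.
# Model id and api_base are placeholder assumptions.
response = litellm.embedding(
    model="hosted_vllm/BAAI/bge-small-en-v1.5",
    input=["hello from litellm"],
    api_base="http://localhost:8000/v1",
    # no api_key needed: per the commit, a fake key is sent so httpx doesn't error
)
print(len(response.data[0]["embedding"]))
```
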
Ishaan Jaff
70883bc1b8 (feat - proxy) Add status_code to litellm_proxy_total_requests_metric_total (#7293)
* fix _select_model_name_for_cost_calc docstring

* add STATUS_CODE to prometheus

* test prometheus unit tests

* test_prometheus_unit_tests.py

* update Proxy Level Tracking Metrics docs

* fix test_proxy_failure_metrics

* fix test_proxy_failure_metrics
2024-12-18 15:55:02 -08:00
Ishaan Jaff
225e0581a7 Replace deprecated Pydantic Config class with model_config BerriAI/litellm#6902 (#6903) (#7300)
Co-authored-by: Dan Siwiec <daniel.siwiec@gmail.com>
2024-12-18 15:30:08 -08:00
Krish Dholakia
050499ec8f Litellm dev readd prompt caching (#7299)
* fix(router.py): re-add saving model id on prompt caching valid successful deployment

* fix(router.py): introduce optional pre_call_checks

isolate prompt caching logic in a separate file

* fix(prompt_caching_deployment_check.py): fix import

* fix(router.py): new 'async_filter_deployments' event hook

allows custom logger to filter deployments returned to routing strategy

* feat(prompt_caching_deployment_check.py): initial working commit of prompt caching based routing

* fix(cooldown_callbacks.py): fix linting error

* fix(budget_limiter.py): move budget logger to async_filter_deployment hook

* test: add unit test

* test(test_router_helper_utils.py): add unit testing

* fix(budget_limiter.py): fix linting errors

* docs(config_settings.md): add 'optional_pre_call_checks' to router_settings param docs
2024-12-18 15:13:49 -08:00
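
Commit 050499ec8f above re-adds prompt-caching-aware routing behind an opt-in `optional_pre_call_checks` router setting; a hedged sketch follows. The "prompt_caching" check name is assumed from the commit text.

```python
from litellm import Router

# Hedged sketch: opt-in pre-call check that routes repeat prompts to the deployment
# that already holds the prompt cache. The check name is assumed from the commit text.
router = Router(
    model_list=[
        {
            "model_name": "claude-3-5-sonnet",
            "litellm_params": {"model": "anthropic/claude-3-5-sonnet-20240620"},
        }
    ],
    optional_pre_call_checks=["prompt_caching"],
)
response = router.completion(
    model="claude-3-5-sonnet",
    messages=[{"role": "user", "content": "long, cacheable shared prefix..."}],
)
print(response.choices[0].message.content)
```
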
Rodrigo Maldonado
f3ab2e979d added sambanova cloud models (#7187)
Co-authored-by: Rodrigo Maldonado <rodrigo.maldonado@sambanovasystems.com>
2024-12-18 11:20:38 -08:00
Ishaan Jaff
bce3ee7eda update docs 2024-12-18 11:16:30 -08:00
Ishaan Jaff
533381d4ad tag budgets fixes 2024-12-18 10:28:37 -08:00
Ishaan Jaff
1da99a8adf fix _select_model_name_for_cost_calc docstring 2024-12-18 09:39:31 -08:00
Krish Dholakia
2c5cf471f0 Litellm security fixes (#7282)
* build(Dockerfile): bump node version

* build(Dockerfile): bump python version

fix critical errors

* build(requirements.txt): fix snyk errors
2024-12-18 09:38:52 -08:00
Krish Dholakia
e7918f097b fix(proxy_server.py): pass model access groups to get_key/get_team mo… (#7281)
* fix(proxy_server.py): pass model access groups to get_key/get_team models

allows end user to see actual models they have access to, instead of default models

* fix(auth_checks.py): fix linting errors

* fix: fix linting errors
2024-12-18 09:33:33 -08:00
Krrish Dholakia
c7ff5a53d7 build: bump version 2024-12-18 09:21:13 -08:00
Ishaan Jaff
c7b288ce30 (fix) unable to pass input_type parameter to Voyage AI embedding model (#7276)
* VoyageEmbeddingConfig

* fix voyage logic to get params

* add voyage embedding transformation

* add get_provider_embedding_config

* use BaseEmbeddingConfig

* voyage clean up

* use llm http handler for embedding transformations

* test_voyage_ai_embedding_extra_params

* add voyage async

* test_voyage_ai_embedding_extra_params

* add async for llm http handler

* update BaseLLMEmbeddingTest

* test_voyage_ai_embedding_extra_params

* fix linting

* fix get_provider_embedding_config

* fix anthropic text test

* update location of base/chat/transformation

* fix import path

* fix IBMWatsonXAIConfig
2024-12-17 19:23:49 -08:00
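
Commit c7b288ce30 above fixes passing provider-specific params such as `input_type` through to Voyage AI embeddings; a minimal sketch follows. The model name is a placeholder and `VOYAGE_API_KEY` is assumed to be set in the environment.

```python
import litellm

# Hedged sketch: Voyage AI embedding with the provider-specific input_type param
# that this fix lets through. Model name is a placeholder; VOYAGE_API_KEY assumed set.
response = litellm.embedding(
    model="voyage/voyage-3-lite",
    input=["What is LiteLLM?"],
    input_type="query",
)
print(response.usage)
```
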
Emerson Gomes
63172e67f2 Correct max_tokens on Model DB (#7284)
Corrects several instances of `max_tokens` and `max_output_tokens` for Azure AI models
2024-12-17 17:56:38 -08:00
Ishaan Jaff
1484779cb6 (feat) proxy Azure Blob Storage - Add support for AZURE_STORAGE_ACCOUNT_KEY Auth (#7280)
* add upload_to_azure_data_lake_with_azure_account_key

* async_upload_payload_to_azure_blob_storage

* docs add AZURE_STORAGE_ACCOUNT_KEY

* add azure-storage-file-datalake
2024-12-17 17:35:45 -08:00
Emerson Gomes
f135e092dc Add Azure Llama 3.3 (#7283)
Adds model DB entry for Azure Llama 3.3
2024-12-17 17:26:51 -08:00
Krish Dholakia
f966e279a6 LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263)
* fix(factory.py): skip empty text blocks for bedrock user messages

Fixes https://github.com/BerriAI/litellm/issues/7169

* Add support for Gemini 2.0 GoogleSearch tool (#7257)

* Add support for google_search tool in gemini 2.0

* Add/modify tests

* Fix grounding check

* Remove 2.0 grounding test; exclude experimental model in VERTEX_MODELS_TO_NOT_TEST

* Swap order of tools

* Fix formatting

* fix(get_api_base.py): return api base in streaming response

Fixes https://github.com/BerriAI/litellm/issues/7249

Closes https://github.com/BerriAI/litellm/pull/7250

* fix(cost_calculator.py): only set base model to model if not none

Fixes https://github.com/BerriAI/litellm/issues/7223

* fix(cost_calculator.py): enforce stricter order when picking model for cost calculation

* fix(cost_calculator.py): fix '_select_model_name_for_cost_calc' to return model name with region name prefix if provided

* fix(utils.py): fix 'get_model_info()' to handle edge case where model name starts with custom llm provider AND custom llm provider is given

* fix(cost_calculator.py): handle `custom_llm_provider-` scenario

* fix(cost_calculator.py): e2e working tts cost tracking

ensures initial message is passed in, to cost calculator

* fix(factory.py): suppress linting errors

* fix(cost_calculator.py): strip llm provider from model name after selecting cost calc model

* fix(litellm_logging.py): store initial request in 'input' field + accept base_model to be passed in litellm_params directly

* test: handle none env var value in flaky test

* fix(litellm_logging.py): fix linting errors

---------

Co-authored-by: Sam B <samlingx@gmail.com>
2024-12-17 15:33:36 -08:00
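
Among the fixes in f966e279a6 above is support for the Gemini 2.0 GoogleSearch tool; a hedged sketch follows. The tool spelling (`{"googleSearch": {}}`) and the model name are assumptions based on the commit text.

```python
import litellm

# Hedged sketch: Gemini 2.0 grounding via the google_search tool added above.
# Tool spelling and model name are assumed; GEMINI_API_KEY is expected in the env.
response = litellm.completion(
    model="gemini/gemini-2.0-flash-exp",
    messages=[{"role": "user", "content": "Who won the latest F1 grand prix?"}],
    tools=[{"googleSearch": {}}],
)
print(response.choices[0].message.content)
```
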
Krish Dholakia
7f18a82f72 Litellm dev 12 17 2024 p3 (#7279)
* build(model_prices_and_context_window.json): add new o1 model

* build(model_prices_and_context_window.json): add new gpt-4o-realtime preview models

* build(model_prices_and_context_window.json): add dec 17th gpt-4o realtime preview models

* build(model_prices_and_context_window.json): update gpt-4o model audio token pricing + add audio token cached cost

* build(model_prices_and_context_window.json): add new 'gpt-4o-mini-audio-preview-2024-12-17' model pricing

* build(model_prices_and_context_window.json): add 'supports_prompt_caching' for openai finetune models + cache read cost

* build(model_prices_and_context_window.json): add new o1 models
2024-12-17 15:06:29 -08:00
Krish Dholakia
57809cfbf4 Litellm dev 12 17 2024 p2 (#7277)
* fix(openai/transcription/handler.py): call 'log_pre_api_call' on async calls

* fix(openai/transcriptions/handler.py): call 'logging.pre_call' on sync whisper calls as well

* fix(proxy_cli.py): remove default proxy_cli timeout param

gets passed in as a dynamic request timeout and overrides config values

* fix(langfuse.py): pass litellm httpx client - contains ssl certs (#7052)

Fixes https://github.com/BerriAI/litellm/issues/7046
2024-12-17 14:05:14 -08:00
Krish Dholakia
03e711e3e4 LITELLM: Remove requests library usage (#7235)
* fix(generic_api_callback.py): remove requests lib usage

* fix(budget_manager.py): remove requests lib usage

* fix(main.py): cleanup requests lib usage

* fix(utils.py): remove requests lib usage

* fix(argilla.py): fix argilla test

* fix(athina.py): replace 'requests' lib usage with litellm module

* fix(greenscale.py): replace 'requests' lib usage with httpx

* fix: remove unused 'requests' lib import + replace usage in some places

* fix(prompt_layer.py): remove 'requests' lib usage from prompt layer

* fix(ollama_chat.py): remove 'requests' lib usage

* fix(baseten.py): replace 'requests' lib usage

* fix(codestral/): replace 'requests' lib usage

* fix(predibase/): replace 'requests' lib usage

* refactor: cleanup unused 'requests' lib imports

* fix(oobabooga.py): cleanup 'requests' lib usage

* fix(invoke_handler.py): remove unused 'requests' lib usage

* refactor: cleanup unused 'requests' lib import

* fix: fix linting errors

* refactor(ollama/): move ollama to using base llm http handler

removes 'requests' lib dep for ollama integration

* fix(ollama_chat.py): fix linting errors

* fix(ollama/completion/transformation.py): convert non-jpeg/png image to jpeg/png before passing to ollama
2024-12-17 12:50:04 -08:00
Krish Dholakia
f628290ce7 fix(utils.py): fix openai-like api response format parsing (#7273)
* fix(utils.py): fix openai-like api response format parsing

Fixes issue passing structured output to litellm_proxy/ route

* fix(cost_calculator.py): fix whisper transcription cost calc to use file duration, not response time

* test: skip test if credentials not found
2024-12-17 12:49:09 -08:00
Krrish Dholakia
8212af0ac1 bump: version 1.55.3 → 1.55.4 2024-12-17 08:52:22 -08:00
Krish Dholakia
56491b31d7 docs(input.md): document 'extra_headers' param support (#7268)
* docs(input.md): document 'extra_headers' param support

* fix: #7239 to move Nova topK parameter to `additionalModelRequestFields` (#7240)

Co-authored-by: Ryan Hoium <rhoium>

---------

Co-authored-by: ryanh-ai <3118399+ryanh-ai@users.noreply.github.com>
2024-12-17 07:19:14 -08:00
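
Commit 56491b31d7 above documents the `extra_headers` param; a minimal sketch follows. The header name below is a placeholder example, not a header any provider requires.

```python
import litellm

# Hedged sketch: forwarding custom HTTP headers to the provider via extra_headers.
# The header below is a placeholder example.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
    extra_headers={"X-Request-Source": "docs-example"},
)
print(response.choices[0].message.content)
```
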