Commit graph

18754 commits

Author SHA1 Message Date
Ishaan Jaff
427f2173d2 (feat) Add Bedrock knowledge base pass through endpoints (#7267)
* bugfix: Proxy Routing for Bedrock Knowledgebase URLs are incorrect (#7097)

* Fixing routing bug where bedrock knowledgebase urls were being generated incorrectly

* Preparing for PR

* Preparing for PR

* Preparing for PR

---------

Co-authored-by: Luke Birk <lb0737@att.com>

* fix _is_bedrock_agent_runtime_route

* docs - Query Knowledge Base

* test_is_bedrock_agent_runtime_route

* fix bedrock_proxy_route

---------

Co-authored-by: LBirk <2731718+LBirk@users.noreply.github.com>
Co-authored-by: Luke Birk <lb0737@att.com>
2024-12-16 22:19:34 -08:00
Ishaan Jaff
d891861c8e (feat) Add Azure Blob Storage Logging Integration (#7265)
* add path to http handler

* AzureBlobStorageLogger

* test_azure_blob_storage

* use constants for Azure storage

* use helper get_azure_ad_token_from_entrata_id

* azure blob storage support

* get_azure_ad_token_from_azure_storage

* fix import

* azure logging

* docs azure storage

* add docs on azure blobs

* add premium user check

* add azure_storage as identified logging callback

* async_upload_payload_to_azure_blob_storage

* docs azure storage

* callback_class_str_to_classType
2024-12-16 22:18:22 -08:00
Ishaan Jaff
efed363ea1 docs add response format on main pages 2024-12-16 08:41:12 -08:00
Ishaan Jaff
38d65f11ac Update README.md 2024-12-16 08:36:57 -08:00
Ishaan Jaff
84aa0d632c ci/cd run again 2024-12-16 08:19:14 -08:00
Ishaan Jaff
58a75592eb docs update 2024-12-16 08:06:06 -08:00
Krrish Dholakia
c0d35aa8a7 bump: version 1.55.2 → 1.55.3 2024-12-14 23:03:12 -08:00
Krish Dholakia
194acfa95c Litellm dev 12 14 2024 p1 (#7231)
* fix(router.py): fix reading + using deployment-specific num retries on router

Fixes https://github.com/BerriAI/litellm/issues/7001

* fix(router.py): ensure 'timeout' in litellm_params overrides any value in router settings

Refactors all routes to use common '_update_kwargs_with_deployment' which has the timeout handling

* fix(router.py): fix timeout check
2024-12-14 22:22:29 -08:00
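The timeout precedence fixed above can be sketched in plain Python (hypothetical helper name; the real logic lives in the common `_update_kwargs_with_deployment` in `router.py`):

```python
def resolve_timeout(deployment_params: dict, router_settings: dict):
    # A deployment-specific 'timeout' in litellm_params wins;
    # the router-level setting is only a fallback.
    return deployment_params.get("timeout", router_settings.get("timeout"))
```

So a deployment configured with `timeout: 10` is not overridden by a router-wide `timeout: 600`.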
Ishaan Jaff
2459f9735d (feat) Add Tag-based budgets on litellm router / proxy (#7236)
* add BudgetConfig

* add _get_tags_from_request_kwargs

* test_tag_budgets_e2e_test_expect_to_fail

* add a check for request tags

* fix _async_get_cache_keys_for_router_budget_limiting

* fix test

* fix _sync_in_memory_spend_with_redis

* _async_get_cache_keys_for_router_budget_limiting

* fix _init_tag_budgets

* fix type casting

* docs show error for tag budget limit hit

* fix _get_tags_from_request_kwargs

* fix undo change
2024-12-14 17:28:36 -08:00
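A minimal in-memory sketch of the tag-budget idea from this commit (names are illustrative, not LiteLLM's actual `BudgetConfig`/limiter classes; the real implementation syncs spend via Redis):

```python
from dataclasses import dataclass

@dataclass
class TagBudget:
    max_budget: float   # dollars allowed for this tag
    spend: float = 0.0  # dollars spent so far

def admit_request(budgets: dict, tags: list, cost: float) -> bool:
    """Reject the request if any tag's budget would be exceeded; else record spend."""
    if any(t in budgets and budgets[t].spend + cost > budgets[t].max_budget
           for t in tags):
        return False
    for t in tags:
        if t in budgets:
            budgets[t].spend += cost
    return True
```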
Ishaan Jaff
3fdd164fee ui new build 2024-12-14 17:15:31 -08:00
Krish Dholakia
edbf5eeeb3 Litellm remove circular imports (#7232)
* fix(utils.py): initial commit to remove circular imports - moves llmproviders to utils.py

* fix(router.py): fix 'litellm.EmbeddingResponse' import from router.py

* refactor: fix litellm.ModelResponse import on pass through endpoints

* refactor(litellm_logging.py): fix circular import for custom callbacks literal

* fix(factory.py): fix circular imports inside prompt factory

* fix(cost_calculator.py): fix circular import for 'litellm.Usage'

* fix(proxy_server.py): fix potential circular import with `litellm.Router'

* fix(proxy/utils.py): fix potential circular import in `litellm.Router`

* fix: remove circular imports in 'auth_checks' and 'guardrails/'

* fix(prompt_injection_detection.py): fix router import

* fix(vertex_passthrough_logging_handler.py): fix potential circular imports in vertex pass through

* fix(anthropic_pass_through_logging_handler.py): fix potential circular imports

* fix(slack_alerting.py-+-ollama_chat.py): fix modelresponse import

* fix(base.py): fix potential circular import

* fix(handler.py): fix potential circular ref in codestral + cohere handlers

* fix(azure.py): fix potential circular imports

* fix(gpt_transformation.py): fix modelresponse import

* fix(litellm_logging.py): add logging base class - simplify typing

makes it easy for other files to type check the logging obj without introducing circular imports

* fix(azure_ai/embed): fix potential circular import on handler.py

* fix(databricks/): fix potential circular imports in databricks/

* fix(vertex_ai/): fix potential circular imports on vertex ai embeddings

* fix(vertex_ai/image_gen): fix import

* fix(watsonx-+-bedrock): cleanup imports

* refactor(anthropic-pass-through-+-petals): cleanup imports

* refactor(huggingface/): cleanup imports

* fix(ollama-+-clarifai): cleanup circular imports

* fix(openai_like/): fix import

* fix(openai_like/): fix embedding handler

cleanup imports

* refactor(openai.py): cleanup imports

* fix(sagemaker/transformation.py): fix import

* ci(config.yml): add circular import test to ci/cd
2024-12-14 16:28:34 -08:00
David Manouchehri
8e4e763095 Add new Gemini 2.0 Flash model to Vertex AI. (#7193) 2024-12-14 15:59:43 -08:00
Ivan Vykopal
a091964dd4 Fix vllm import (#7224)
* fix: Fix vllm import

* Update handler.py
2024-12-14 15:57:49 -08:00
Ishaan Jaff
0038dc1762 ui fix tags getting proxy settings (#7234) 2024-12-14 15:33:03 -08:00
Ishaan Jaff
2f76c11d9c (code quality) Add ruff check to ban print in repo (#7233)
* fix ruff print check

* fix ruff check
2024-12-14 15:32:24 -08:00
Ishaan Jaff
a987a49595 ui new build 2024-12-14 14:16:15 -08:00
Ishaan Jaff
bbee38dd0e (UI) Fix Usage Tab - Don't make expensive UI queries after SpendLogs crosses 1M Rows (#7229)
* ui - fix viewing usage tab when 1M rows

* fix don't fetch tagSpend data if DISABLE_EXPENSIVE_DB_QUERIES is True
2024-12-14 14:14:34 -08:00
Ishaan Jaff
f235971819 (UI fix) - Allow editing Key Metadata (#7230)
* ui - fix allow editing key metadata

* fix initial values loading

* ui fix key table
2024-12-14 14:14:24 -08:00
Ishaan Jaff
a99e780e4c ui fix key table 2024-12-14 14:12:39 -08:00
Ishaan Jaff
9dddf8a749 ui - new build 2024-12-14 12:13:49 -08:00
Ishaan Jaff
73dcbf8d4e (proxy) - Auth fix, ensure re-using safe request body for checking model field (#7222)
* litellm fix auth check

* fix _read_request_body

* test_auth_with_form_data_and_model

* fix auth check

* fix _read_request_body

* fix _safe_get_request_headers
2024-12-14 12:01:25 -08:00
Krish Dholakia
d02b9a111a fix(main.py): fix retries being multiplied when using openai sdk (#7221)
* fix(main.py): fix retries being multiplied when using openai sdk

Closes https://github.com/BerriAI/litellm/pull/7130

* docs(prompt_management.md): add langfuse prompt management doc

* feat(team_endpoints.py): allow teams to add their own models

Enables teams to call their own finetuned models via the proxy

* test: add better enforcement check testing for `/model/new` now that teams can add their own models

* docs(team_model_add.md): tutorial for allowing teams to add their own models

* test: fix test
2024-12-14 11:56:55 -08:00
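The retry multiplication fixed above is easiest to see as arithmetic: each LiteLLM-level attempt can itself be retried by the underlying OpenAI SDK, so the counts multiply unless SDK retries are pinned to zero. An illustrative sketch (not the actual fix code):

```python
def total_requests(litellm_num_retries: int, sdk_max_retries: int) -> int:
    # every litellm-level attempt can itself be retried by the SDK
    return (litellm_num_retries + 1) * (sdk_max_retries + 1)
```

With `num_retries=2` and the SDK's default `max_retries=2`, a single call could issue nine requests; disabling SDK retries restores the expected three.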
Ishaan Jaff
925d33aa9d Litellm add router to base llm testing (#7202)
* code qa add litellm router to base llm testing

* test_image_url

* fix img url

* fix add router to base llm class

* fix base llm testing

* add test scenario

* fix test_json_response_format

* fixes base llm testing

* fix base llm testing

* fix test image url
2024-12-13 19:16:28 -08:00
dependabot[bot]
1b6b47b16d build(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui/litellm-dashboard (#7216)
Bumps [nanoid](https://github.com/ai/nanoid) from 3.3.7 to 3.3.8.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.3.7...3.3.8)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-13 19:16:05 -08:00
Ishaan Jaff
bc46916bb3 (feat - Router / Proxy ) Allow setting budget limits per LLM deployment (#7220)
* fix test_deployment_budget_limits_e2e_test

* refactor async_log_success_event to track spend for provider + deployment

* fix format

* rename class to RouterBudgetLimiting

* rename func

* rename types used for budgets

* add new types for deployment budgets

* add budget limits for deployments

* fix checking budgets set for provider

* update file names

* fix linting error

* _track_provider_remaining_budget_prometheus

* async_filter_deployments

* fix model list passed to router

* update error

* test_deployment_budgets_e2e_test_expect_to_fail

* fix test case

* run deployment budget limits
2024-12-13 19:15:51 -08:00
Krish Dholakia
c3f637012b Litellm dev 12 13 2024 p1 (#7219)
* fix(litellm_logging.py): pass user metadata to langsmith on sdk calls

* fix(litellm_logging.py): pass nested user metadata to logging integration - e.g. langsmith

* fix(exception_mapping_utils.py): catch and clarify watsonx `/text/chat` endpoint not supported error message.

Closes https://github.com/BerriAI/litellm/issues/7213

* fix(watsonx/common_utils.py): accept new 'WATSONX_IAM_URL' env var

allows user to use local watsonx

Fixes https://github.com/BerriAI/litellm/issues/4991

* fix(litellm_logging.py): cleanup unused function

* test: skip bad ibm test
2024-12-13 19:01:28 -08:00
Krrish Dholakia
19881a5bf7 bump: version 1.55.1 → 1.55.2 2024-12-13 12:55:56 -08:00
Krish Dholakia
550677e63d Litellm dev 12 11 2024 v2 (#7215)
* feat(bedrock/): add bedrock converse top k param

Closes https://github.com/BerriAI/litellm/issues/7087

* Fix bedrock empty content error (#7177)

* add resolver

* handle empty content on bedrock with default content

* use existing default message, tests

* Update tests/llm_translation/test_bedrock_completion.py

* fix tests

* Revert "add resolver"

This reverts commit c717e376ee.

* fallback to empty

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(factory.py): handle empty content blocks in messages

Fixes https://github.com/BerriAI/litellm/issues/7169

* feat(router.py): add stripped model check to model fallback search

if model_name="openai/gpt-3.5-turbo" and fallback=[{"gpt-3.5-turbo"..}] the fallback should just work as expected

* fix: fix linting error

* fix(factory.py): fix linting error

* fix(factory.py): in base case still support skip empty text blocks

---------

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2024-12-13 12:49:57 -08:00
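The stripped-model fallback check described above can be sketched as follows (hypothetical helper; `fallbacks` uses the router's list-of-dicts shape, and the real matching lives in `router.py`):

```python
def find_fallbacks(model_name: str, fallbacks: list):
    # "openai/gpt-3.5-turbo" should also match a fallback keyed on "gpt-3.5-turbo"
    stripped = model_name.split("/", 1)[-1]
    for mapping in fallbacks:
        for key, targets in mapping.items():
            if key in (model_name, stripped):
                return targets
    return None
```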
Krish Dholakia
a42f008cd0 Litellm dev 12 12 2024 (#7203)
* fix(azure/): support passing headers to azure openai endpoints

Fixes https://github.com/BerriAI/litellm/issues/6217

* fix(utils.py): move default tokenizer to just openai

hf tokenizer makes network calls when trying to get the tokenizer - this slows down execution

* fix(router.py): fix pattern matching router - add generic "*" to it as well

Fixes issue where generic "*" model access group wouldn't show up

* fix(pattern_match_deployments.py): match to more specific pattern

allows setting generic wildcard model access group and excluding specific models more easily

* fix(proxy_server.py): fix _delete_deployment to handle base case where db_model list is empty

don't delete all router models b/c of empty list

Fixes https://github.com/BerriAI/litellm/issues/7196

* fix(anthropic/): fix handling response_format for anthropic messages with anthropic api

* fix(fireworks_ai/): support passing response_format + tool call in same message

Addresses https://github.com/BerriAI/litellm/issues/7135

* Revert "fix(fireworks_ai/): support passing response_format + tool call in same message"

This reverts commit 6a30dc6929.

* test: fix test

* fix(replicate/): fix replicate default retry/polling logic

* test: add unit testing for router pattern matching

* test: update test to use default oai tokenizer

* test: mark flaky test

* test: skip flaky test
2024-12-13 08:54:03 -08:00
Ishaan Jaff
e65f990319 bump: version 1.55.0 → 1.55.1 2024-12-12 20:50:45 -08:00
Ishaan Jaff
b56e29db36 (fix) latency fix - revert prompt caching check on litellm router (#7211)
* attempt to fix latency issue

* fix latency issues for router prompt caching
2024-12-12 20:50:16 -08:00
Ishaan Jaff
01b20f0bb8 (minor fix proxy) Clarify Proxy Rate limit errors are showing hash of litellm virtual key (#7210)
* fix clarify rate limit errors are showing litellm virtual key

* fix constants.py

* update test

* fix test parallel limiter
2024-12-12 20:13:14 -08:00
Ishaan Jaff
a0464f2970 fix testing retry audio test 3 times 2024-12-12 20:09:14 -08:00
Ishaan Jaff
b1c3e2d4ef (feat) UI - Disable Usage Tab once SpendLogs is 1M+ Rows (#7208)
* use utils to set proxy spend logs row count

* store proxy state variables

* fix check for _has_user_setup_sso

* fix proxyStateVariables

* fix dup code

* rename getProxyUISettings

* add fixes

* ui emit num spend logs rows

* test_proxy_server_prisma_setup

* move MAX_SPENDLOG_ROWS_TO_QUERY to constants

* test_get_ui_settings_spend_logs_threshold
2024-12-12 18:43:17 -08:00
Ishaan Jaff
8c7605a164 fix: Support WebP image format and avoid token calculation error (#7182)
* fix get_image_dimensions

* attempt without pillow

* add clear type hints

* fix run_async_function_within_sync_function

* fix calculage_img_tokens

* fix is_prompt_caching_valid_prompt

* fix naming

* fix calculate_img_tokens

* fix unused imports

* fix calculate_img_tokens

* test test_is_prompt_caching_enabled_error_handling

* test_is_prompt_caching_enabled_return_default_image_dimensions

* fix openai_token_counter

* fix get_image_dimensions

* test_token_counter_with_image_url_with_detail_high

* test_img_url_token_counter

* fix test utils

* fix testing

* test_is_prompt_caching_enabled
2024-12-12 14:32:39 -08:00
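Getting image dimensions "without pillow", as this commit does, comes down to parsing a few header bytes. A sketch for WebP covering its three container variants (offsets per the WebP container spec; this is illustrative, not LiteLLM's actual `get_image_dimensions`):

```python
import struct

def webp_dimensions(data: bytes):
    """Return (width, height) for a WebP byte stream, or None if unrecognized."""
    if data[:4] != b"RIFF" or data[8:12] != b"WEBP":
        return None
    fmt = data[12:16]
    if fmt == b"VP8X":  # extended: 24-bit little-endian, minus-one encoded
        w = int.from_bytes(data[24:27], "little") + 1
        h = int.from_bytes(data[27:30], "little") + 1
        return (w, h)
    if fmt == b"VP8 ":  # lossy: two uint16 fields, low 14 bits each
        w, h = struct.unpack("<HH", data[26:30])
        return (w & 0x3FFF, h & 0x3FFF)
    if fmt == b"VP8L":  # lossless: 14-bit fields packed after the signature byte
        bits = int.from_bytes(data[21:25], "little")
        return ((bits & 0x3FFF) + 1, ((bits >> 14) & 0x3FFF) + 1)
    return None
```

Skipping Pillow this way avoids decoding the whole image just to size it for token calculation.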
Ishaan Jaff
c6d6bda76c (docs) Document StandardLoggingPayload Spec (#7201)
* add slp spec to docs

* docs slp

* test slp enforcement
2024-12-12 14:00:42 -08:00
Ishaan Jaff
0862a233be (feat) add error_code, error_class, llm_provider to StandardLoggingPayload (#7200)
* add StandardLoggingPayloadErrorInformation to error

* test_get_error_information
2024-12-12 12:18:10 -08:00
Ishaan Jaff
02fc8d8738 (Feat) DataDog Logger - Add HOSTNAME and POD_NAME to DataDog logs (#7189)
* add unit test for test_datadog_static_methods

* docs dd vars

* test_datadog_payload_environment_variables

* test_datadog_static_methods

* docs env vars

* fix table
2024-12-12 12:06:26 -08:00
dependabot[bot]
e14ce4a0d3 build(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui (#7198)
Bumps [nanoid](https://github.com/ai/nanoid) from 3.3.7 to 3.3.8.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.3.7...3.3.8)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-12 12:04:54 -08:00
Ishaan Jaff
2185587b4d (feat) add response_time to StandardLoggingPayload - logged on datadog, gcs_bucket, s3_bucket etc (#7199)
* feat - add response_time to slp

* test_get_response_time

* docs slp

* fix test_datadog_logging_http_request
2024-12-12 12:04:43 -08:00
Krrish Dholakia
04138c2df7 test: update hf test to check if client closed 2024-12-12 11:34:50 -08:00
Ishaan Jaff
f1fa2d3fef ci/cd run release pipeline 2024-12-12 10:48:47 -08:00
Ishaan Jaff
bd8f39419d fix hf failing streaming test 2024-12-12 10:48:00 -08:00
Ishaan Jaff
2b9d2417ca bump: version 1.54.1 → 1.55.0 2024-12-12 10:39:04 -08:00
Krish Dholakia
a9aeb21d0b fix(acompletion): support fallbacks on acompletion (#7184)
* fix(acompletion): support fallbacks on acompletion

allows health checks for wildcard routes to use fallback models

* test: update cohere generate api testing

* add max tokens to health check (#7000)

* fix: fix health check test

* test: update testing

---------

Co-authored-by: Cameron <561860+wallies@users.noreply.github.com>
2024-12-11 19:20:54 -08:00
Krrish Dholakia
5fe77499d2 build(model_prices_and_context_window.json): add new dbrx llama 3.3 model
fixes llama cost calc on databricks
2024-12-11 13:01:22 -08:00
Ishaan Jaff
74917d7b16 fix test_vertexai_model_garden_model_completion 2024-12-11 12:07:32 -08:00
Krish Dholakia
c466f494f2 fix(get_supported_openai_params.py): cleanup (#7176) 2024-12-11 01:15:53 -08:00
Ishaan Jaff
dfba7e7481 fix merge conflicts 2024-12-11 01:11:53 -08:00
Krrish Dholakia
982ef7ca04 build: Squashed commit of https://github.com/BerriAI/litellm/pull/7171
Closes https://github.com/BerriAI/litellm/pull/7171
2024-12-11 01:10:12 -08:00