Commit graph

1610 commits

Krish Dholakia
c4780479a9
Litellm dev 01 10 2025 p2 (#7679)
* test(test_basic_python_version.py): assert all optional dependencies are marked as extras on poetry

Fixes https://github.com/BerriAI/litellm/issues/7677

* docs(secret.md): clarify 'read_and_write' secret manager usage on aws

* docs(secret.md): fix doc

* build(ui/teams.tsx): add edit/delete button for updating user / team membership on ui

allows updating user role to admin on ui

* build(ui/teams.tsx): display edit member component on ui, when edit button on member clicked

* feat(team_endpoints.py): support updating team member role to admin via api endpoints

allows team member to become admin post-add

* build(ui/user_dashboard.tsx): if team admin - show all team keys

Fixes https://github.com/BerriAI/litellm/issues/7650

* test(config.yml): add tomli to ci/cd

* test: don't call python_basic_testing in local testing (covered by python 3.13 testing)
2025-01-10 21:50:53 -08:00
Ishaan Jaff
02f5c44a35
[Bug fix]: Proxy Auth Layer - Allow Azure Realtime routes as llm_api_routes (#7684)
* fix route check azure realtime endpoints

* test_is_llm_api_route

* fix /realtime

* test_routes_on_litellm_proxy
2025-01-10 20:38:06 -08:00
Ishaan Jaff
5c870c0c51
(performance improvement - litellm sdk + proxy) - ensure litellm does not create unnecessary threads when running async functions (#7680)
* fix handle_sync_success_callbacks_for_async_calls

* fix handle_sync_success_callbacks_for_async_calls

* fix linting / testing errors

* use handle_sync_success_callbacks_for_async_calls

* add unit testing for logging fixes
2025-01-10 17:57:22 -08:00
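
The fix above routes sync success callbacks through the running event loop's shared executor instead of spawning a thread per async call. A minimal sketch of that pattern (function and argument names here are illustrative, not litellm's actual `handle_sync_success_callbacks_for_async_calls` signature):

```python
import asyncio
from typing import Callable, List

async def run_sync_callbacks(callbacks: List[Callable], payload: dict) -> None:
    loop = asyncio.get_running_loop()
    for cb in callbacks:
        # offload each blocking callback to the loop's shared default thread
        # pool rather than creating a fresh threading.Thread per call
        await loop.run_in_executor(None, cb, payload)

async def main() -> None:
    await run_sync_callbacks([print], {"model": "gpt-4", "status": "success"})

asyncio.run(main())
```
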
Krish Dholakia
a3e65c9bcb
LiteLLM Minor Fixes & Improvements (01/10/2025) - p1 (#7670)
* test(test_get_model_info.py): add unit test confirming router deployment updates global 'get_model_info'

* fix(get_supported_openai_params.py): fix custom llm provider 'get_supported_openai_params'

Fixes https://github.com/BerriAI/litellm/issues/7668

* docs(azure.md): clarify how azure ad token refresh on proxy works

Closes https://github.com/BerriAI/litellm/issues/7665
2025-01-10 17:49:05 -08:00
Krish Dholakia
c10ae8879e
fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini p… (#7660)
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url

* refactor(router.py): refactor '_prompt_management_factory' to use logging obj get_chat_completion logic

deduplicates code

* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages

* docs(prompt_management.md): update prompt management to be in beta

given feedback, this still needs revision (e.g. passing in the user message rather than ignoring it)

* refactor(prompt_management_base.py): introduce base class for prompt management

allows consistent behaviour across prompt management integrations

* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base

* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set

allows tracking what prompt was used for what purpose

* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse

allows logging prompt id / prompt variables to langfuse

* test: fix test

* fix(router.py): cleanup unused imports

* fix: fix linting error

* fix: fix trace param typing

* fix: fix linting errors

* fix: fix code qa check
2025-01-10 07:31:59 -08:00
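
A sketch of what the prompt management base class introduced above could look like; the method names are assumptions based on the commit messages, not litellm's actual API:

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional

class PromptManagementBase(ABC):
    """Shared contract so every prompt management integration behaves alike."""

    @abstractmethod
    def get_prompt_template(
        self, prompt_id: str, prompt_variables: Optional[Dict[str, str]]
    ) -> List[Dict[str, str]]:
        """Fetch the stored template and render it with the given variables."""

    def get_chat_completion_prompt(
        self,
        prompt_id: str,
        prompt_variables: Optional[Dict[str, str]],
        client_messages: List[Dict[str, str]],
    ) -> List[Dict[str, str]]:
        # per the commit above: append the client's messages to the
        # template messages instead of ignoring them
        template = self.get_prompt_template(prompt_id, prompt_variables)
        return template + client_messages
```
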
Krish Dholakia
865e6d5bda
fix(main.py): fix lm_studio/ embedding routing (#7658)
* fix(main.py): fix lm_studio/ embedding routing

adds the mapping + updates docs with example

* docs(self_serve.md): update doc to show how to auto-add sso users to teams

* fix(streaming_handler.py): simplify async iterator check, to just check if streaming response is an async iterable
2025-01-09 23:03:24 -08:00
Ishaan Jaff
13f364682d
(Feat - Batches API) add support for retrieving vertex api batch jobs (#7661)
* add _async_retrieve_batch

* fix aretrieve_batch

* fix _get_batch_id_from_vertex_ai_batch_response

* fix batches docs
2025-01-09 18:35:03 -08:00
Ishaan Jaff
2507c275f6
(proxy perf improvement) - use uvloop for higher RPS (10%-20% higher RPS) (#7662)
* uvicorn use uvloop

* fix uvloop==0.21.0

* add uvloop to pyproject

* test_completion_response_ratelimit_headers
2025-01-09 18:11:20 -08:00
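
The change above swaps the stdlib asyncio event loop for uvloop when running the proxy under uvicorn. A minimal sketch with a trivial ASGI app (host/port are placeholders; the commit pins uvloop==0.21.0):

```python
import uvicorn
import uvloop

# replace the stdlib asyncio event loop policy with uvloop's;
# uvicorn also accepts loop="uvloop" directly
uvloop.install()

async def app(scope, receive, send):
    # trivial ASGI app standing in for the proxy
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=4000)
```
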
Ishaan Jaff
a85de46ef7
(proxy - RPS) - Get 2K RPS at 4 instances, minor fix aiohttp_openai/ (#7659)
* speed up transform_response

* use 2 workers

* undo changes to uvicorn

* ci/cd run again
2025-01-09 17:24:18 -08:00
Krish Dholakia
907bcd3a62
Litellm dev 01 08 2025 p1 (#7640)
* feat(ui_sso.py): support reading team ids from sso token

* feat(ui_sso.py): working upsert sso user teams membership in litellm - if team exists

Adds the user to the relevant teams, if the SSO token lists teams that exist on litellm

* fix(ui_sso.py): safely handle add team member task

* build(ui/): support setting team id when creating team on UI

* build(ui/): teams.tsx

allow setting team id on ui

* build(circle_ci/requirements.txt): add fastapi-sso to ci/cd testing

* fix: fix linting errors
2025-01-08 22:08:20 -08:00
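
A hedged sketch of the SSO upsert flow described above: read team ids from the token, add the user only to teams that already exist on litellm, and never let a failed add break login. All helper names and the in-memory store are hypothetical:

```python
import asyncio
from typing import Dict, List, Optional

# hypothetical stand-in for litellm's team store
EXISTING_TEAMS: Dict[str, dict] = {"team-a": {"members": []}}

async def get_team(team_id: str) -> Optional[dict]:
    return EXISTING_TEAMS.get(team_id)

async def add_team_member(team_id: str, user_id: str) -> None:
    EXISTING_TEAMS[team_id]["members"].append(user_id)

async def upsert_sso_user_teams(user_id: str, sso_team_ids: List[str]) -> None:
    for team_id in sso_team_ids:
        if await get_team(team_id) is None:
            continue  # only upsert membership when the team exists on litellm
        try:
            await add_team_member(team_id, user_id)
        except Exception as e:  # "safely handle add team member task"
            print(f"skipping team {team_id}: {e}")

asyncio.run(upsert_sso_user_teams("user-1", ["team-a", "team-b"]))
```
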
Krish Dholakia
1e3370f3cb
LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 (#7643)
* fix(streaming_chunk_builder_utils.py): add test for groq tool calling + streaming + combine chunks

Addresses https://github.com/BerriAI/litellm/issues/7621

* fix(streaming_utils.py): fix modelresponseiterator for openai like chunk parser

ensures chunk parser uses the correct tool call id when translating the chunk

Fixes https://github.com/BerriAI/litellm/issues/7621

* build(model_hub.tsx): display cost pricing on model hub

* build(model_hub.tsx): show cost per token pricing + complete model information

* fix(types/utils.py): fix usage object handling
2025-01-08 19:45:19 -08:00
Ishaan Jaff
48d4f79206
fix is llm api route check (#7631) 2025-01-08 18:45:59 -08:00
Krish Dholakia
4af23353d6
Allow assigning teams to org on UI + OpenAI omni-moderation cost model tracking (#7566)
* feat(cost_calculator.py): add cost tracking ($0) for openai moderations endpoint

removes sentry cost tracking errors caused by this

* build(teams.tsx): allow assigning teams to orgs
2025-01-08 16:58:21 -08:00
Krish Dholakia
0ffc5379ea
Litellm dev 01 07 2025 p2 (#7622)
* build(ui/): update ui

* fix: drop unsupported non-whitespace characters for real when calling… (#7484)

* fix: drop unsupported non-whitespace characters for real when calling anthropic with stop sequences

* test: add parameterized test for _map_stop_sequences method in AnthropicConfig

---------

Co-authored-by: Wolfram Ravenwolf <52386626+WolframRavenwolf@users.noreply.github.com>
2025-01-08 16:56:39 -08:00
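
A sketch of the stop-sequence mapping idea from the #7484 commit: Anthropic rejects whitespace-only stop sequences, so drop them and trim surrounding whitespace before sending. The exact filtering rule in `_map_stop_sequences` is an assumption from the commit message:

```python
from typing import List, Optional, Union

def map_stop_sequences(
    stop: Optional[Union[str, List[str]]]
) -> Optional[List[str]]:
    if stop is None:
        return None
    if isinstance(stop, str):
        stop = [stop]
    mapped: List[str] = []
    for s in stop:
        if s.isspace():
            continue              # whitespace-only stop sequences are rejected
        mapped.append(s.strip())  # strip surrounding whitespace as well
    return mapped or None

print(map_stop_sequences([" human:", "\n", "stop"]))  # -> ['human:', 'stop']
```
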
Krish Dholakia
a187cee538
Litellm dev 01 07 2025 p3 (#7635)
* fix(__init__.py): fix mistral large tool calling

map bedrock mistral large to converse endpoint

Fixes https://github.com/BerriAI/litellm/issues/7521

* braintrust logging: respect project_id, add more metrics + more (#7613)

* braintrust logging: respect project_id, add more metrics

* braintrust logger: improve json formatting

* braintrust logger: add test for passing specific project_id

* rm unneeded import

* braintrust logging: rm unneeded var in tests

* add project_name

* update docs

---------

Co-authored-by: H <no@email.com>

---------

Co-authored-by: hi019 <65871571+hi019@users.noreply.github.com>
Co-authored-by: H <no@email.com>
2025-01-08 11:46:24 -08:00
Krish Dholakia
e8ed40a27b
Litellm dev 01 01 2025 p2 (#7615)
* fix(utils.py): prevent double logging when passing 'fallbacks=' to .completion()

Fixes https://github.com/BerriAI/litellm/issues/7477

* fix(utils.py): fix vertex anthropic check

* fix(utils.py): ensure supported params is always set

Fixes https://github.com/BerriAI/litellm/issues/7470

* test(test_optional_params.py): add unit testing to prevent mistranslation

Fixes https://github.com/BerriAI/litellm/issues/7470

* fix: fix linting error

* test: cleanup
2025-01-07 21:40:33 -08:00
Ishaan Jaff
081826a5d6
(Feat) soft budget alerts on keys (#7623)
* add class WebhookEvent(CallInfo)

* handle soft budget alerts

* handle soft budget

* fix budget alerts

* fix CallInfo

* fix _get_user_info_str

* test_soft_budget_alerts

* test_soft_budget_alert
2025-01-07 21:36:34 -08:00
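
A simplified sketch of the soft budget check: unlike a hard `max_budget`, crossing the soft budget only fires an alert. Field and function names here are assumptions, not litellm's actual `WebhookEvent` schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallInfo:
    key_alias: str
    spend: float
    soft_budget: Optional[float] = None
    max_budget: Optional[float] = None

def check_soft_budget(info: CallInfo) -> Optional[str]:
    """Return a webhook-style alert message once spend crosses the soft budget."""
    if info.soft_budget is not None and info.spend >= info.soft_budget:
        return (
            f"soft_budget_crossed: key={info.key_alias} "
            f"spend=${info.spend:.2f} soft_budget=${info.soft_budget:.2f}"
        )
    return None  # hard budget enforcement is handled separately

print(check_soft_budget(CallInfo(key_alias="prod-key", spend=12.5, soft_budget=10.0)))
```
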
Krish Dholakia
4e69711411
Litellm dev 01 07 2025 p1 (#7618)
* fix(main.py): pass custom llm provider on litellm logging provider update

* fix(cost_calculator.py): don't append provider name to return model if existing llm provider

Fixes https://github.com/BerriAI/litellm/issues/7607

* fix(prometheus_services.py): fix prometheus system health error logging

Fixes https://github.com/BerriAI/litellm/issues/7611
2025-01-07 21:22:31 -08:00
Ishaan Jaff
55139b8fd6 update tests
2025-01-06 22:36:00 -08:00
Ishaan Jaff
2ca0977921
aiohttp_openai/ fixes - allow using aiohttp_openai/gpt-4o (#7598)
* fixes for get_complete_url

* update aiohttp tests

* fix event loop for aiohttp

* ci/cd run again

* test_aiohttp_openai
2025-01-06 21:39:11 -08:00
Ishaan Jaff
744beac754 ci/cd run again 2025-01-06 21:35:34 -08:00
Krish Dholakia
fef7839e8a
Litellm dev 01 06 2025 p1 (#7594)
* fix(custom_logger.py): expose new 'async_get_chat_completion_prompt' event hook

* fix(custom_logger.py): langfuse_prompt_management.py

remove 'headers' from custom logger 'async_get_chat_completion_prompt' and 'get_chat_completion_prompt' event hooks

* feat(router.py): expose new function for prompt management based routing

* feat(router.py): partial working router prompt factory logic

allows a load-balanced model group to be used as the model name in a langfuse prompt management call

* feat(router.py): fix prompt management with load balanced model group

* feat(langfuse_prompt_management.py): support reading in openai params from langfuse

enables users to define optional params in langfuse instead of in client code

* test(test_Router.py): add unit test for router based langfuse prompt management

* fix: fix linting errors
2025-01-06 21:26:21 -08:00
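
Usage sketch for the router-based langfuse prompt management above, following litellm's prompt management docs: `prompt_id` and `prompt_variables` are resolved against the template stored in Langfuse, and the model name can point at a load-balanced group. Assumes Langfuse credentials are set in the environment; the prompt id is a placeholder:

```python
import litellm

response = litellm.completion(
    model="langfuse/gpt-3.5-turbo",          # prompt-management-backed model
    prompt_id="jokes-prompt",                 # template stored in Langfuse
    prompt_variables={"topic": "databases"},  # rendered into the template
    messages=[{"role": "user", "content": "make it short"}],
)
print(response.choices[0].message.content)
```
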
Krish Dholakia
0c3fef24cd
Litellm dev 01 06 2025 p2 (#7597)
* test(test_amazing_vertex_completion.py): fix test

* test: initial working code gecko test

* fix(vertex_ai_non_gemini.py): support vertex ai code gecko fake streaming

Fixes https://github.com/BerriAI/litellm/issues/7360

* test(test_get_model_info.py): add test for getting custom provider model info

Covers https://github.com/BerriAI/litellm/issues/7575

* fix(utils.py): fix get_provider_model_info check

Handle custom llm provider scenario

Fixes https://github.com/BerriAI/litellm/issues/7575
2025-01-06 21:04:49 -08:00
Krish Dholakia
b397dc1497
Litellm dev 01 06 2025 p3 (#7596)
* build(model_prices_and_context_window.json): add gemini-1.5-pro 'supports_vision' = true

Fixes https://github.com/BerriAI/litellm/issues/7592

* build(model_prices_and_context_window.json): add new mistral models pricing + model info
2025-01-06 20:44:04 -08:00
Ishaan Jaff
0b5c1392f7
fix _return_user_api_key_auth_obj (#7591) 2025-01-06 16:43:14 -08:00
Krrish Dholakia
23685e93f3 test: skip tests pending vertex credentials
2025-01-05 15:29:51 -08:00
Ishaan Jaff
a40baec5ed use latest bucket for testing 2025-01-05 14:55:48 -08:00
Krrish Dholakia
8ae2ca4ed9 test: fix test 2025-01-05 14:52:38 -08:00
Krrish Dholakia
8bda3006fa fix: test 2025-01-05 14:37:17 -08:00
Krrish Dholakia
32538f09fc test: cleanup test 2025-01-05 14:18:29 -08:00
Ishaan Jaff
3110bb0723 use pathrise-convert-1606954137718 2025-01-05 14:14:43 -08:00
Ishaan Jaff
616211daee ci/cd run again 2025-01-05 14:11:27 -08:00
Ishaan Jaff
137879ffea vertex testing use pathrise-convert-1606954137718 2025-01-05 14:00:17 -08:00
Krrish Dholakia
c0e4485fe0 test: update test amazing vertex
2025-01-05 13:56:31 -08:00
Ishaan Jaff
ef8812d150 ci/cd update vertex acct 2025-01-05 13:43:32 -08:00
Krish Dholakia
ce97e7e054
fix(groq/chat/transformation.py): fix groq response_format transformation (#7565)
Fixes https://github.com/BerriAI/litellm/issues/4804
2025-01-04 19:39:04 -08:00
Ishaan Jaff
46d9d29bff
(Feat) Hashicorp Secret Manager - Allow storing virtual keys in secret manager (#7549)
* use a base abstract class

* async_write_secret for hcorp

* fix hcorp

* async_write_secret for hashicorp secret manager

* store virtual keys in hcorp

* add delete secret

* test_hashicorp_secret_manager_write_secret

* test_hashicorp_secret_manager_delete_secret

* docs Supported Secret Managers

* docs storing keys in hcorp

* docs hcorp

* docs secret managers

* test_key_generate_with_secret_manager_call

* fix unused imports
2025-01-04 11:35:59 -08:00
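
Under the hood, writing a virtual key to HashiCorp Vault's KV v2 engine is a POST to `/v1/secret/data/<name>` with an `X-Vault-Token` header. A minimal sketch using httpx (env var names and the `secret` KV mount path are assumptions, not litellm's actual code):

```python
import asyncio
import os
import httpx

async def async_write_secret(secret_name: str, secret_value: str) -> dict:
    vault_addr = os.environ["HCP_VAULT_ADDR"]   # e.g. https://vault.example.com:8200
    token = os.environ["HCP_VAULT_TOKEN"]
    url = f"{vault_addr}/v1/secret/data/{secret_name}"  # KV v2 data path
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            url,
            headers={"X-Vault-Token": token},
            json={"data": {"key": secret_value}},  # KV v2 wraps payloads in "data"
        )
        resp.raise_for_status()
        return resp.json()

# asyncio.run(async_write_secret("virtual-key-1", "sk-..."))
```
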
Ishaan Jaff
d1b101b9d7
(Fix) - Slack Alerting , don't send duplicate spend report when used on multi instance settings (#7546)
* fix send_weekly_spend_report

* test_spend_report_cache
2025-01-04 10:54:35 -08:00
Krish Dholakia
d43d83f9ef
feat(router.py): support request prioritization for text completion c… (#7540)
* feat(router.py): support request prioritization for text completion calls

* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`

Fixes https://github.com/BerriAI/litellm/issues/7485

* fix: fix linting errors

* fix: fix linting error

* test(test_router_helper_utils.py): add direct test for '_schedule_factory'

Fixes code qa test
2025-01-03 19:35:44 -08:00
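
Usage sketch for request-prioritized text completion, per litellm's scheduler docs: pass `priority` on the call, where lower values are scheduled first (0 is highest). The model entry is a placeholder:

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[{
        "model_name": "gpt-3.5-turbo-instruct",
        "litellm_params": {"model": "gpt-3.5-turbo-instruct"},
    }],
)

async def main():
    response = await router.atext_completion(
        model="gpt-3.5-turbo-instruct",
        prompt="Say hello.",
        priority=0,  # 0 = highest priority in the queue
    )
    print(response)

asyncio.run(main())
```
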
Krish Dholakia
f770dd0c95
Support checking provider-specific /models endpoints for available models based on key (#7538)
* test(test_utils.py): initial test for valid models

Addresses https://github.com/BerriAI/litellm/issues/7525

* fix: test

* feat(fireworks_ai/transformation.py): support retrieving valid models from fireworks ai endpoint

* refactor(fireworks_ai/): support checking model info on `/v1/models` route

* docs(set_keys.md): update docs to clarify check llm provider api usage

* fix(watsonx/common_utils.py): support 'WATSONX_ZENAPIKEY' for iam auth

* fix(watsonx): read in watsonx token from env var

* fix: fix linting errors

* fix(utils.py): fix provider config check

* style: cleanup unused imports
2025-01-03 19:29:59 -08:00
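
The provider-side check boils down to listing the provider's OpenAI-compatible `/v1/models` route with the key and seeing what comes back. A minimal sketch (the Fireworks AI base URL follows their OpenAI-compatible API; adjust per provider):

```python
import os
import httpx

def get_valid_models(base_url: str, api_key: str) -> list:
    resp = httpx.get(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    resp.raise_for_status()
    return [m["id"] for m in resp.json()["data"]]

# print(get_valid_models(
#     "https://api.fireworks.ai/inference",
#     os.environ["FIREWORKS_AI_API_KEY"],
# ))
```
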
Ishaan Jaff
716efd5fad
(fix proxy perf) use _read_request_body instead of ast.literal_eval to get better performance (#7545)
* fix ast literal eval

* run ci/cd again
2025-01-03 17:48:32 -08:00
Ishaan Jaff
1bb4941036
[Feature]: - allow print alert log to console (#7534)
* update send_to_webhook

* test_print_alerting_payload_warning

* add alerting_args spec

* test_alerting.py
2025-01-03 17:48:13 -08:00
Ishaan Jaff
23104d9a14 test_aiohttp_openai 2025-01-03 15:12:56 -08:00
Krish Dholakia
33f301ec86
Litellm dev 01 02 2025 p1 (#7516)
* fix(redact_messages.py): fix redact messages for non-model response input to be dictionary

fixes issue with otel logging when message redaction is enabled

* fix(proxy_server.py): fix langfuse key leak in exception string

* test: fix test

* test: fix test

* test: fix tests
2025-01-03 14:40:57 -08:00
Ishaan Jaff
fb59f20979
(Feat) - Hashicorp secret manager, use TLS cert authentication (#7532)
* fix - don't print hcorp secrets in debug logs

* hcorp - tls auth fixes

* fix tls_ca_cert_path

* test_hashicorp_secret_manager_tls_cert_auth

* hcp secret docs
2025-01-03 14:23:53 -08:00
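
Vault's TLS certificate auth method exchanges a client certificate for a token via the documented `/v1/auth/cert/login` endpoint. A sketch of that login with httpx (file paths are placeholders; this is not litellm's internal code):

```python
import httpx

def vault_tls_cert_login(
    vault_addr: str,
    tls_cert_path: str,
    tls_key_path: str,
    tls_ca_cert_path: str,
) -> str:
    resp = httpx.post(
        f"{vault_addr}/v1/auth/cert/login",
        cert=(tls_cert_path, tls_key_path),  # client certificate + key
        verify=tls_ca_cert_path,             # CA bundle for the Vault server
        json={},
    )
    resp.raise_for_status()
    return resp.json()["auth"]["client_token"]
```
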
Ishaan Jaff
9fef0a6d16
(fix) GCS bucket logger - apply truncate_standard_logging_payload_content to standard_logging_payload and ensure GCS flushes queue on fails (#7519)
* fix async_send_batch for gcs

* fix truncate GCS logger

* test_truncate_standard_logging_payload
2025-01-03 08:09:03 -08:00
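
A sketch of the truncation idea behind `truncate_standard_logging_payload_content`: cap oversized string fields before the payload is queued for GCS. The size limit and field list here are assumptions, not litellm's actual constants:

```python
MAX_FIELD_LEN = 10_000  # assumed cap, bytes of text per field

def truncate_standard_logging_payload(payload: dict) -> dict:
    for field in ("messages", "response", "error_str"):
        value = payload.get(field)
        if isinstance(value, str) and len(value) > MAX_FIELD_LEN:
            payload[field] = value[:MAX_FIELD_LEN] + "...truncated"
    return payload

print(truncate_standard_logging_payload({"messages": "x" * 20_000})["messages"][-12:])
```
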
Ishaan Jaff
d861aa8ff3
(perf) use aiohttp for custom_openai (#7514)
* use aiohttp handler

* BaseLLMAIOHTTPHandler

* use CustomOpenAIChatConfig

* CustomOpenAIChatConfig

* CustomOpenAIChatConfig

* fix linting

* AiohttpOpenAIChatConfig

* fix order

* aiohttp_openai
2025-01-02 22:15:17 -08:00
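
The aiohttp handler pattern above posts an OpenAI-shaped payload through an `aiohttp.ClientSession`, which pools connections instead of opening a new one per request. A minimal sketch (base URL and payload are placeholders, not the actual `BaseLLMAIOHTTPHandler` code):

```python
import asyncio
import os
import aiohttp

async def chat_completion(base_url: str, api_key: str, payload: dict) -> dict:
    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"{base_url}/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload,
        ) as resp:
            resp.raise_for_status()
            return await resp.json()

# asyncio.run(chat_completion(
#     "https://api.openai.com/v1",
#     os.environ["OPENAI_API_KEY"],
#     {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]},
# ))
```
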
Ishaan Jaff
4d93fe787b
Revert "(fix) GCS bucket logger - apply `truncate_standard_logging_payload_co…" (#7515)
This reverts commit 26a37c50c9.
2025-01-02 22:01:02 -08:00
Krish Dholakia
45b93f2721
Litellm dev 01 01 2025 p3 (#7503)
* fix(utils.py): add new validate tool choice helper function

Prevents https://github.com/BerriAI/litellm/issues/7483

* fix(main.py): add tool choice validation on .completion()

prevents user error like - https://github.com/BerriAI/litellm/issues/7483

* fix(utils.py): fix return val of tool choice validation logic
2025-01-01 22:12:15 -08:00
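
A sketch of the tool-choice validation helper above: accept the OpenAI-spec shapes (`"auto"`, `"none"`, `"required"`, or a `{"type": "function", ...}` dict) and fail fast on anything else, guarding against the malformed values in issue #7483. The exact rules are assumptions:

```python
from typing import Optional, Union

def validate_chat_completion_tool_choice(
    tool_choice: Optional[Union[str, dict]]
) -> Optional[Union[str, dict]]:
    if tool_choice is None or tool_choice in ("auto", "none", "required"):
        return tool_choice
    if isinstance(tool_choice, dict):
        if tool_choice.get("type") == "function" and "function" in tool_choice:
            return tool_choice
    raise ValueError(f"Invalid tool_choice: {tool_choice!r}")

print(validate_chat_completion_tool_choice("auto"))
print(validate_chat_completion_tool_choice(
    {"type": "function", "function": {"name": "get_weather"}}
))
```
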
Ishaan Jaff
f96f54a0f5 test_aview_spend_per_user 2025-01-01 21:48:35 -08:00