Commit graph

3423 commits

Author SHA1 Message Date
Krish Dholakia
d43d83f9ef
feat(router.py): support request prioritization for text completion c… (#7540)
* feat(router.py): support request prioritization for text completion calls

* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`

Fixes https://github.com/BerriAI/litellm/issues/7485

* fix: fix linting errors

* fix: fix linting error

* test(test_router_helper_utils.py): add direct test for '_schedule_factory'

Fixes code qa test
2025-01-03 19:35:44 -08:00
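The prioritization feature in #7540 above extends the router's scheduler to text-completion calls. The core idea — lower `priority` values get dequeued first — can be sketched with a plain priority queue; this is an illustration of the concept, not litellm's actual `_schedule_factory` implementation:

```python
import heapq
import itertools

class RequestScheduler:
    """Toy scheduler: requests with lower `priority` values are dequeued first.

    Illustrative sketch only -- NOT litellm's actual router scheduler.
    """

    def __init__(self):
        self._heap = []
        # Monotonic counter breaks ties so equal-priority requests stay FIFO.
        self._counter = itertools.count()

    def enqueue(self, request: str, priority: int = 0) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), request))

    def dequeue(self) -> str:
        _, _, request = heapq.heappop(self._heap)
        return request

scheduler = RequestScheduler()
scheduler.enqueue("batch summarization job", priority=10)
scheduler.enqueue("interactive completion", priority=0)
print(scheduler.dequeue())  # interactive completion
```

In litellm itself the priority is passed per-request and the router drains the queue against available deployment capacity; the sketch above only shows the ordering behavior.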
Krish Dholakia
f770dd0c95
Support checking provider-specific /models endpoints for available models based on key (#7538)
* test(test_utils.py): initial test for valid models

Addresses https://github.com/BerriAI/litellm/issues/7525

* fix: test

* feat(fireworks_ai/transformation.py): support retrieving valid models from fireworks ai endpoint

* refactor(fireworks_ai/): support checking model info on `/v1/models` route

* docs(set_keys.md): update docs to clarify check llm provider api usage

* fix(watsonx/common_utils.py): support 'WATSONX_ZENAPIKEY' for iam auth

* fix(watsonx): read in watsonx token from env var

* fix: fix linting errors

* fix(utils.py): fix provider config check

* style: cleanup unused imports
2025-01-03 19:29:59 -08:00
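The commits in #7538 above teach litellm to hit a provider's `/v1/models` route and derive the models a key can actually use. Responses on that route follow the OpenAI-style list shape (`{"object": "list", "data": [{"id": ...}]}`). A minimal offline parser sketch — `extract_model_ids` is a hypothetical helper for illustration, and the sample payload shape is an assumption:

```python
import json

def extract_model_ids(models_response: str, provider_prefix: str = "") -> list[str]:
    """Pull model ids out of an OpenAI-style `/v1/models` response body,
    optionally re-prefixing them the way litellm namespaces providers.

    Hypothetical helper for illustration only.
    """
    payload = json.loads(models_response)
    return [f"{provider_prefix}{m['id']}" for m in payload.get("data", [])]

# Offline sample shaped like a Fireworks AI `/v1/models` reply (assumed shape).
sample = json.dumps({
    "object": "list",
    "data": [
        {"id": "accounts/fireworks/models/llama-v3p1-8b-instruct", "object": "model"},
        {"id": "accounts/fireworks/models/mixtral-8x7b-instruct", "object": "model"},
    ],
})
print(extract_model_ids(sample, provider_prefix="fireworks_ai/"))
```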
Ishaan Jaff
1bb4941036
[Feature]: - allow print alert log to console (#7534)
* update send_to_webhook

* test_print_alerting_payload_warning

* add alerting_args spec

* test_alerting.py
2025-01-03 17:48:13 -08:00
Ishaan Jaff
fb59f20979
(Feat) - Hashicorp secret manager, use TLS cert authentication (#7532)
* fix - don't print hcorp secrets in debug logs

* hcorp - tls auth fixes

* fix tls_ca_cert_path

* test_hashicorp_secret_manager_tls_cert_auth

* hcp secret docs
2025-01-03 14:23:53 -08:00
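The TLS-auth work in #7532 above roughly amounts to a proxy config like the following. This is a hedged sketch: `tls_ca_cert_path` appears in the commit bullets, but the other variable names are assumptions — check the hcp secret docs referenced above for the real ones.

```yaml
# Sketch: Hashicorp Vault with TLS certificate authentication.
# Only tls_ca_cert_path is taken from the commit above; other names are assumed.
general_settings:
  key_management_system: "hashicorp_vault"

environment_variables:
  HCP_VAULT_ADDR: "https://vault.example.com:8200"
  HCP_VAULT_CLIENT_CERT: "/path/to/client.pem"   # assumed variable name
  HCP_VAULT_CLIENT_KEY: "/path/to/client.key"    # assumed variable name
```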
Ishaan Jaff
d3a3e45e5b docs pass through routes
All checks were successful
Read Version from pyproject.toml / read-version (push) Successful in 12s
2025-01-03 12:55:23 -08:00
Krish Dholakia
25e6f46910
Litellm dev 01 02 2025 p2 (#7512)
* feat(deepgram/transformation.py): support reading in deepgram api base from env var

* fix(litellm_logging.py): make skipping log message a .info

easier to see

* docs(logging.md): add doc on turning off all tracking/logging for a request
2025-01-02 21:57:51 -08:00
Ishaan Jaff
b9280528d3 docs enable_pre_call_checks
2025-01-02 08:27:03 -08:00
Krrish Dholakia
c292f5805a docs(humanloop.md): add humanloop docs
2025-01-01 22:18:01 -08:00
Krish Dholakia
07fc394072
Litellm dev 01 01 2025 p1 (#7498)
* refactor(prometheus.py): refactor to remove `_tag` metrics and incorporate in regular metrics

* fix(prometheus.py): handle label values not set in enum values

* feat(prometheus.py): working e2e custom metadata labels

* docs(prometheus.md): update docs to clarify how custom metrics would work

* test(test_prometheus_unit_tests.py): fix test

* test: add unit testing
2025-01-01 18:59:28 -08:00
Ishaan Jaff
665fb59f48 doc update 2025-01-01 18:40:59 -08:00
Ishaan Jaff
cf60444916
(Feat) Add support for reading secrets from Hashicorp vault (#7497)
* HashicorpSecretManager

* test_hashicorp_secret_managerv

* use 1 helper initialize_secret_manager

* add HASHICORP_VAULT

* working config

* hcorp read_secret

* HashicorpSecretManager

* add secret_manager_testing

* use 1 folder for secret manager testing

* test_hashicorp_secret_manager_get_secret

* HashicorpSecretManager

* docs HCP secrets

* update folder name

* docs hcorp secret manager

* remove unused imports

* add conftest.py

* fix tests

* docs document env vars
2025-01-01 18:35:05 -08:00
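The feature in #7497 above (`HashicorpSecretManager`, `initialize_secret_manager`, the `HASHICORP_VAULT` enum) would be enabled through the proxy config roughly as below. The `key_management_system` value mirrors the enum name from the commit; the vault env var names are assumptions, so treat this as a sketch:

```yaml
# Sketch: read proxy secrets (e.g. provider API keys) from Hashicorp Vault.
general_settings:
  key_management_system: "hashicorp_vault"

environment_variables:
  HCP_VAULT_ADDR: "https://vault.example.com:8200"  # assumed variable name
  HCP_VAULT_TOKEN: "hvs.xxxxxxxx"                   # assumed variable name
```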
Ishaan Jaff
e1fcd3ee43
(docs) Add docs on load testing benchmarks (#7499)
* docs benchmarks

* docs benchmarks
2025-01-01 18:33:20 -08:00
Ishaan Jaff
38bfefa6ef
(Feat) - LiteLLM Use UsernamePasswordCredential for Azure OpenAI (#7496)
* add get_azure_ad_token_from_username_password

* docs azure use username / password for auth

* update doc

* get_azure_ad_token_from_username_password

* test test_get_azure_ad_token_from_username_password
2025-01-01 14:11:27 -08:00
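#7496 above wires Azure OpenAI auth through `azure.identity.UsernamePasswordCredential` via a `get_azure_ad_token_from_username_password` helper. On the proxy that plausibly looks like the config below — the `azure_username` / `azure_password` / `client_id` field names are inferred from the commit titles, not verified:

```yaml
# Sketch: Azure OpenAI via UsernamePasswordCredential (field names assumed).
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: azure/gpt-4o
      api_base: https://my-endpoint.openai.azure.com/
      azure_username: os.environ/AZURE_USERNAME
      azure_password: os.environ/AZURE_PASSWORD
      client_id: os.environ/AZURE_CLIENT_ID
```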
Ishaan Jaff
2979b8301c
(feat) POST /fine_tuning/jobs support passing vertex specific hyper params (#7490)
* update convert_openai_request_to_vertex

* test_create_vertex_fine_tune_jobs_mocked

* fix order of methods

* update LiteLLMFineTuningJobCreate

* update OpenAIFineTuningHyperparameters

* update vertex hyper params in response

* _transform_openai_hyperparameters_to_vertex_hyperparameters

* supervised_tuning_spec["hyperParameters"] fix

* fix mapping for ft params testing

* docs fine tuning apis

* fix test_convert_basic_openai_request_to_vertex_request

* update hyperparams for create fine tuning

* fix linting

* test_create_vertex_fine_tune_jobs_mocked_with_hyperparameters

* run ci/cd again

* test_convert_basic_openai_request_to_vertex_request
2025-01-01 07:44:48 -08:00
Krish Dholakia
d984a9281a
Prometheus - custom metrics support + other improvements (#7489)
* fix(prometheus.py): refactor litellm_input_tokens_metric to use label factory

makes adding new metrics easier

* feat(prometheus.py): add 'request_model' to 'litellm_input_tokens_metric'

* refactor(prometheus.py): refactor 'litellm_output_tokens_metric' to use label factory

makes adding new metrics easier

* feat(prometheus.py): emit requested model in 'litellm_output_tokens_metric'

* feat(prometheus.py): support tracking success events with custom metrics

* refactor(prometheus.py): refactor '_set_latency_metrics' to just use the initially created enum values dictionary

reduces scope for missing values

* feat(prometheus.py): refactor all tags to support custom metadata tags

enables metadata tags to be used across for e2e tracking

* fix(prometheus.py): fix requested model on success event enum_values

* test: fix test

* test: fix test

* test: handle filenotfound error

* docs(prometheus.md): add new values to prometheus

* docs(prometheus.md): document adding custom metrics on prometheus

* bump: version 1.56.5 → 1.56.6
2025-01-01 07:41:50 -08:00
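The custom-metrics support in #7489 above lets request metadata tags flow through as Prometheus labels end-to-end. A plausible proxy config sketch — the `custom_prometheus_metadata_labels` key name is an assumption drawn from the prometheus.md docs the commit updates:

```yaml
# Sketch: emit custom metadata fields as labels on litellm's Prometheus metrics.
litellm_settings:
  callbacks: ["prometheus"]
  custom_prometheus_metadata_labels:   # assumed setting name
    - "metadata.customer_tier"
    - "metadata.feature"
```

Requests that pass `metadata: {"customer_tier": "pro", ...}` would then show up with those labels on metrics like `litellm_input_tokens_metric`.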
Ishaan Jaff
03b1db5a7d
(Feat) - Add PagerDuty Alerting Integration (#7478)
* define basic types

* fix verbose_logger.exception statement

* fix basic alerting

* test pager duty alerting

* test_pagerduty_alerting_high_failure_rate

* PagerDutyAlerting

* async_log_failure_event

* use pre_call_hook

* add _request_is_completed helper util

* update AlertingConfig

* rename PagerDutyInternalEvent

* _send_alert_if_thresholds_crossed

* use pagerduty as _custom_logger_compatible_callbacks_literal

* fix slack alerting imports

* fix imports in slack alerting

* PagerDutyAlerting

* fix _load_alerting_settings

* test_pagerduty_hanging_request_alerting

* working pager duty alerting

* fix linting

* doc pager duty alerting

* update hanging_response_handler

* fix import location

* update failure_threshold

* update async_pre_call_hook

* docs pagerduty

* test - callback_class_str_to_classType

* fix linting errors

* fix linting + testing error

* PagerDutyAlerting

* test_pagerduty_hanging_request_alerting

* fix unused imports

* docs pager duty

* @pytest.mark.flaky(retries=6, delay=2)

* test_model_info_bedrock_converse_enforcement
2025-01-01 07:12:51 -08:00
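Wiring up the PagerDuty integration from #7478 above would look roughly like this on the proxy. `pagerduty` as a callback literal and `failure_threshold` come from the commit bullets; the `alerting_args` shape and env var name are assumptions:

```yaml
# Sketch: alert to PagerDuty on high failure rates / hanging requests.
general_settings:
  alerting: ["pagerduty"]
  alerting_args:                          # shape assumed
    failure_threshold: 4                  # failures before an alert fires
    failure_threshold_window_seconds: 60  # assumed parameter name

environment_variables:
  PAGERDUTY_API_KEY: "pd-integration-key"  # assumed variable name
```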
Daniel Ko
01a108cf82
Added missing quote (#7481) 2024-12-31 23:23:49 -08:00
Krish Dholakia
080de89cfb
Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493)
* fix(langfuse_prompt_management.py): migrate dynamic logging to langfuse custom logger compatible class

* fix(langfuse_prompt_management.py): support failure callback logging to langfuse as well

* feat(proxy_server.py): support setting custom tokenizer on config.yaml

Allows customizing value for `/utils/token_counter`

* fix(proxy_server.py): fix linting errors

* test: skip if file not found

* style: cleanup unused import

* docs(configs.md): add docs on setting custom tokenizer
2024-12-31 23:18:41 -08:00
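The custom-tokenizer half of #7493 above lets `/utils/token_counter` use a tokenizer other than the default for a given model. A hedged config sketch — the `custom_tokenizer` block shape under `model_info` is an assumption based on the configs.md doc the commit adds:

```yaml
# Sketch: use a Hugging Face tokenizer for /utils/token_counter on this model.
model_list:
  - model_name: my-model
    litellm_params:
      model: openai/my-model
    model_info:
      custom_tokenizer:                      # assumed field layout
        identifier: deepseek-ai/DeepSeek-V3  # any HF tokenizer repo
        auth_token: os.environ/HUGGINGFACE_API_KEY
```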
Ishaan Jaff
6705e30d5d
(docs) Add docs on using Vertex with Fine Tuning APIs (#7491)
* docs add Overview for vertex endpoints

* docs add vertex ft api to docs

* Advanced use case - Passing `adapter_size` to the Vertex AI API
2024-12-31 18:50:18 -08:00
Ishaan Jaff
60bdfb437f doc on streaming usage litellm proxy 2024-12-30 21:06:34 -08:00
Ishaan Jaff
24dd6559a6 localeCompare
2024-12-28 20:32:49 -08:00
Krrish Dholakia
192c3b2848 docs(index.md): fix doc link 2024-12-28 20:28:50 -08:00
Krish Dholakia
31ace870a2
Litellm dev 12 28 2024 p1 (#7463)
* refactor(utils.py): migrate amazon titan config to base config

* refactor(utils.py): refactor bedrock meta invoke model translation to use base config

* refactor(utils.py): move bedrock ai21 to base config

* refactor(utils.py): move bedrock cohere to base config

* refactor(utils.py): move bedrock mistral to use base config

* refactor(utils.py): move all provider optional param translations to using a config

* docs(clientside_auth.md): clarify how to pass vertex region to litellm proxy

* fix(utils.py): handle scenario where custom llm provider is none / empty

* fix: fix get config

* test(test_otel_load_tests.py): widen perf margin

* fix(utils.py): fix get provider config check to handle custom llm's

* fix(utils.py): fix check
2024-12-28 20:26:00 -08:00
Krrish Dholakia
ec7fcc982d docs(index.md): add deepgram to release notes 2024-12-28 20:24:55 -08:00
Krrish Dholakia
bbf46913fa docs(deepgram.md): add table clarifying supported openai endpoint 2024-12-28 20:21:31 -08:00
Ishaan Jaff
cd59e62b40 doc update order 2024-12-28 20:20:12 -08:00
Krrish Dholakia
e18143dcba docs(deepgram.md): add deepgram model support to docs 2024-12-28 20:19:12 -08:00
Ishaan Jaff
8c569899c0 update release note 2024-12-28 20:15:30 -08:00
Ishaan Jaff
ea8f0913c2 test_e2e_batches_files 2024-12-28 19:54:04 -08:00
Krrish Dholakia
24a3403655 docs(spending_monitoring.md): add section on disabling spend logs to db 2024-12-28 19:48:50 -08:00
Ishaan Jaff
32e8bdef6f update clean up jobs 2024-12-28 19:45:19 -08:00
Krrish Dholakia
ab665dc7af docs(spend_monitoring.md): cleanup doc 2024-12-28 19:42:03 -08:00
Krish Dholakia
cfb6890b9f
Litellm dev 12 28 2024 p2 (#7458)
* docs(sidebar.js): docs for support model access groups for wildcard routes

* feat(key_management_endpoints.py): add check if user is premium_user when adding model access group for wildcard route

* refactor(docs/): make control model access a root-level doc in proxy sidebar

easier to discover how to control model access on litellm

* docs: more cleanup

* feat(fireworks_ai/): add document inlining support

Enables user to call non-vision models with images/pdfs/etc.

* test(test_fireworks_ai_translation.py): add unit testing for fireworks ai transform inline helper util

* docs(docs/): add document inlining details to fireworks ai docs

* feat(fireworks_ai/): allow user to dynamically disable auto add transform inline

allows client-side disabling of this feature for proxy users

* feat(fireworks_ai/): return 'supports_vision' and 'supports_pdf_input' true on all fireworks ai models

now true as fireworks ai supports document inlining

* test: fix tests

* fix(router.py): add unit testing for _is_model_access_group_for_wildcard_route
2024-12-28 19:38:06 -08:00
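The document-inlining feature from #7458 above works by appending Fireworks AI's `#transform=inline` suffix to the model name, so non-vision models can accept images and PDFs; the commits make litellm add it automatically and let clients disable that. Sketch of opting in explicitly (model id is illustrative):

```yaml
# Sketch: Fireworks AI document inlining via the #transform=inline suffix.
model_list:
  - model_name: fireworks-llama
    litellm_params:
      model: fireworks_ai/accounts/fireworks/models/llama-v3p3-70b-instruct#transform=inline
```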
Ishaan Jaff
4e65722a00
(Bug Fix) Add health check support for realtime models (#7453)
* add mode: realtime

* add _realtime_health_check

* test_realtime_health_check

* azure _realtime_health_check

* _realtime_health_check

* Realtime Models

* fix code quality
2024-12-28 18:15:00 -08:00
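For the realtime health checks in #7453 above, `mode: realtime` comes straight from the commit bullets; marking a deployment this way tells `/health` to exercise the realtime endpoint instead of a chat completion. The surrounding fields are illustrative:

```yaml
# Sketch: health-check a realtime model via its realtime endpoint.
model_list:
  - model_name: openai-realtime
    litellm_params:
      model: openai/gpt-4o-realtime-preview
    model_info:
      mode: realtime   # from the commit above; triggers _realtime_health_check
```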
Ishaan Jaff
49fa6515c0
docs spend monitoring (#7461) 2024-12-28 16:39:24 -08:00
Ishaan Jaff
8610c7bf93 docs release notes
2024-12-27 21:41:21 -08:00
Ishaan Jaff
a962d88822 add keywords 2024-12-27 21:39:46 -08:00
Ishaan Jaff
570ab5498e v1.56.3 release notes 2024-12-27 21:36:49 -08:00
Ishaan Jaff
79c783e83f docs guardrails 2024-12-27 16:31:03 -08:00
Ishaan Jaff
0b4ef57172 docs add guardrail spec 2024-12-27 15:47:43 -08:00
Ishaan Jaff
b53861f3fb docs update gemini/ link 2024-12-27 15:32:50 -08:00
Igor Ribeiro Lima
11932d0576
Add Gemini embedding doc (#7436) 2024-12-27 15:27:26 -08:00
Ishaan Jaff
62753eea69
(Feat) Log Guardrails run, guardrail response on logging integrations (#7445)
* add guardrail_information to SLP

* use standard_logging_guardrail_information

* track StandardLoggingGuardrailInformation

* use log_guardrail_information

* use log_guardrail_information

* docs guardrails

* docs guardrails

* update quick start

* fix presidio logging for sync functions

* update Guardrail type

* enforce add_standard_logging_guardrail_information_to_request_data

* update gd docs
2024-12-27 15:01:56 -08:00
Krrish Dholakia
37f998171b docs(index.md): new release notes
2024-12-26 22:01:29 -08:00
Krish Dholakia
9d82ff4793
Litellm dev 12 26 2024 p3 (#7434)
* build(model_prices_and_context_window.json): update groq models to specify 'supports_vision' parameter

Closes https://github.com/BerriAI/litellm/issues/7433

* docs(groq.md): add groq vision example to docs

Closes https://github.com/BerriAI/litellm/issues/7433

* fix(prometheus.py): refactor self.litellm_proxy_failed_requests_metric to use label factory

* feat(prometheus.py): new 'litellm_proxy_failed_requests_by_tag_metric'

allows tracking failed requests by tag on proxy

* fix(prometheus.py): fix exception logging

* feat(prometheus.py): add new 'litellm_request_total_latency_by_tag_metric'

enables tracking latency by use-case

* feat(prometheus.py): add new llm api latency by tag metric

* feat(prometheus.py): new litellm_deployment_latency_per_output_token_by_tag metric

allows tracking deployment latency by tag

* fix(prometheus.py): refactor 'litellm_requests_metric' to use enum values + label factory

* feat(prometheus.py): new litellm_proxy_total_requests_by_tag metric

allows tracking total requests by tag

* feat(prometheus.py): new metric litellm_deployment_successful_fallbacks_by_tag

allows tracking deployment fallbacks by tag

* fix(prometheus.py): new 'litellm_deployment_failed_fallbacks_by_tag' metric

allows tracking failed fallbacks on deployment by custom tag

* test: fix test

* test: rename test to run earlier

* test: skip flaky test
2024-12-26 21:21:16 -08:00
Krish Dholakia
539f166166
Support budget/rate limit tiers for keys (#7429)
* feat(proxy/utils.py): get associated litellm budget from db in combined_view for key

allows user to create rate limit tiers and associate those to keys

* feat(proxy/_types.py): update the value of key-level tpm/rpm/model max budget metrics with the associated budget table values if set

allows rate limit tiers to be easily applied to keys

* docs(rate_limit_tiers.md): add doc on setting rate limit / budget tiers

make feature discoverable

* feat(key_management_endpoints.py): return litellm_budget_table value in key generate

make it easy for user to know associated budget on key creation

* fix(key_management_endpoints.py): document 'budget_id' param in `/key/generate`

* docs(key_management_endpoints.py): document budget_id usage

* refactor(budget_management_endpoints.py): refactor budget endpoints into separate file - makes it easier to run documentation testing against it

* docs(test_api_docs.py): add budget endpoints to ci/cd doc test + add missing param info to docs

* fix(customer_endpoints.py): use new pydantic obj name

* docs(user_management_heirarchy.md): add simple doc explaining teams/keys/org/users on litellm

* Litellm dev 12 26 2024 p2 (#7432)

* (Feat) Add logging for `POST v1/fine_tuning/jobs`  (#7426)

* init commit ft jobs logging

* add ft logging

* add logging for FineTuningJob

* simple FT Job create test

* (docs) - show all supported Azure OpenAI endpoints in overview  (#7428)

* azure batches

* update doc

* docs azure endpoints

* docs endpoints on azure

* docs azure batches api

* docs azure batches api

* fix(key_management_endpoints.py): fix key update to actually work

* test(test_key_management.py): add e2e test asserting ui key update call works

* fix: proxy/_types - fix linting errors

* test: update test

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* fix: test

* fix(parallel_request_limiter.py): enforce tpm/rpm limits on key from tiers

* fix: fix linting errors

* test: fix test

* fix: remove unused import

* test: update test

* docs(customer_endpoints.py): document new model_max_budget param

* test: specify unique key alias

* docs(budget_management_endpoints.py): document new model_max_budget param

* test: fix test

* test: fix tests

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-12-26 19:05:27 -08:00
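The budget-tier work in #7429 above hinges on one rule: when a key points at a budget-table row via `budget_id`, the row's tpm/rpm/max-budget values override the key's own in the combined view. A pure-Python sketch of that merge rule — the type and function names here are illustrative, not litellm's:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BudgetTier:
    """Illustrative stand-in for a litellm budget-table row (via `budget_id`)."""
    tpm_limit: Optional[int] = None
    rpm_limit: Optional[int] = None
    max_budget: Optional[float] = None

@dataclass
class KeyLimits:
    """Illustrative stand-in for key-level limits."""
    tpm_limit: Optional[int] = None
    rpm_limit: Optional[int] = None
    max_budget: Optional[float] = None

def apply_budget_tier(key: KeyLimits, tier: Optional[BudgetTier]) -> KeyLimits:
    """Tier values override key values wherever the tier sets them.

    Sketch of the combined-view behavior described in #7429, not litellm's code.
    """
    if tier is None:
        return key
    for field in ("tpm_limit", "rpm_limit", "max_budget"):
        tier_value = getattr(tier, field)
        if tier_value is not None:
            setattr(key, field, tier_value)
    return key

key = apply_budget_tier(KeyLimits(tpm_limit=1000), BudgetTier(tpm_limit=100, max_budget=25.0))
print(key)  # KeyLimits(tpm_limit=100, rpm_limit=None, max_budget=25.0)
```

The rate limiter then enforces the merged values, which is what the `parallel_request_limiter.py` bullet above refers to.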
Ishaan Jaff
12c4e7e695
docs guardrail params (#7430)
2024-12-26 11:08:47 -08:00
Ishaan Jaff
c1d6c35aef
(docs) - show all supported Azure OpenAI endpoints in overview (#7428)
* azure batches

* update doc

* docs azure endpoints

* docs endpoints on azure

* docs azure batches api

* docs azure batches api
2024-12-26 09:01:41 -08:00
Krrish Dholakia
2dcde8ce2b docs: cleanup doc
2024-12-25 21:23:42 -08:00
Krrish Dholakia
506b6f6517 docs(fireworks_ai.md): add audio transcription to fireworks ai doc 2024-12-25 21:22:51 -08:00