Commit graph

3010 commits

Ishaan Jaff
0b4d529af8 (feat) POST /fine_tuning/jobs support passing vertex specific hyper params (#7490)
* update convert_openai_request_to_vertex

* test_create_vertex_fine_tune_jobs_mocked

* fix order of methods

* update LiteLLMFineTuningJobCreate

* update OpenAIFineTuningHyperparameters

* update vertex hyper params in response

* _transform_openai_hyperparameters_to_vertex_hyperparameters

* supervised_tuning_spec["hyperParameters"] fix

* fix mapping for ft params testing

* docs fine tuning apis

* fix test_convert_basic_openai_request_to_vertex_request

* update hyperparams for create fine tuning

* fix linting

* test_create_vertex_fine_tune_jobs_mocked_with_hyperparameters

* run ci/cd again

* test_convert_basic_openai_request_to_vertex_request
2025-01-01 07:44:48 -08:00
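
A minimal sketch of what the change above enables: creating a Vertex fine-tuning job through the LiteLLM proxy in OpenAI format, with Vertex-specific hyperparameters passed through. The model name, `gs://` path, and `custom_llm_provider` routing shown here are assumptions drawn from the commit titles, not a verified API surface.

```python
# Hedged sketch: OpenAI-format fine-tuning request routed to Vertex AI via a
# LiteLLM proxy at localhost:4000. All values below are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

ft_job = client.fine_tuning.jobs.create(
    model="gemini-1.0-pro-002",                  # Vertex model to tune (assumed)
    training_file="gs://my-bucket/train.jsonl",  # Vertex expects a GCS path
    hyperparameters={
        "n_epochs": 3,                           # OpenAI param, mapped to Vertex
        "learning_rate_multiplier": 0.1,
        "adapter_size": "ADAPTER_SIZE_ONE",      # Vertex-specific, per the commit
    },
    extra_body={"custom_llm_provider": "vertex_ai"},  # provider routing (assumed)
)
print(ft_job.id)
```
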
Krish Dholakia
c46c1e6ea0 Prometheus - custom metrics support + other improvements (#7489)
* fix(prometheus.py): refactor litellm_input_tokens_metric to use label factory

makes adding new metrics easier

* feat(prometheus.py): add 'request_model' to 'litellm_input_tokens_metric'

* refactor(prometheus.py): refactor 'litellm_output_tokens_metric' to use label factory

makes adding new metrics easier

* feat(prometheus.py): emit requested model in 'litellm_output_tokens_metric'

* feat(prometheus.py): support tracking success events with custom metrics

* refactor(prometheus.py): refactor '_set_latency_metrics' to just use the initially created enum values dictionary

reduces scope for missing values

* feat(prometheus.py): refactor all tags to support custom metadata tags

enables metadata tags to be used across for e2e tracking

* fix(prometheus.py): fix requested model on success event enum_values

* test: fix test

* test: fix test

* test: handle filenotfound error

* docs(prometheus.md): add new values to prometheus

* docs(prometheus.md): document adding custom metrics on prometheus

* bump: version 1.56.5 → 1.56.6
2025-01-01 07:41:50 -08:00
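
The "label factory" refactor named in these commits can be pictured as a single helper that returns one consistent label set for every metric, so adding a metric (or a custom metadata tag) touches one place. The sketch below is illustrative Python using `prometheus_client`; the names are hypothetical, not litellm's actual internals.

```python
# Illustrative label-factory pattern (hypothetical names, not litellm's code).
from typing import List, Optional
from prometheus_client import Counter

def get_labels(custom_tags: Optional[List[str]] = None) -> List[str]:
    # one shared label set: every metric gets the same base dimensions
    base = ["end_user", "hashed_api_key", "model", "team", "requested_model"]
    return base + (custom_tags or [])

litellm_input_tokens_metric = Counter(
    "litellm_input_tokens",
    "Total input tokens, per request",
    labelnames=get_labels(custom_tags=["metadata_tag"]),
)

litellm_input_tokens_metric.labels(
    end_user="u1", hashed_api_key="k1", model="gpt-4o",
    team="eng", requested_model="gpt-4o", metadata_tag="batch-job",
).inc(42)
```
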
Ishaan Jaff
a39cac313c (Feat) - Add PagerDuty Alerting Integration (#7478)
* define basic types

* fix verbose_logger.exception statement

* fix basic alerting

* test pager duty alerting

* test_pagerduty_alerting_high_failure_rate

* PagerDutyAlerting

* async_log_failure_event

* use pre_call_hook

* add _request_is_completed helper util

* update AlertingConfig

* rename PagerDutyInternalEvent

* _send_alert_if_thresholds_crossed

* use pagerduty as _custom_logger_compatible_callbacks_literal

* fix slack alerting imports

* fix imports in slack alerting

* PagerDutyAlerting

* fix _load_alerting_settings

* test_pagerduty_hanging_request_alerting

* working pager duty alerting

* fix linting

* doc pager duty alerting

* update hanging_response_handler

* fix import location

* update failure_threshold

* update async_pre_call_hook

* docs pagerduty

* test - callback_class_str_to_classType

* fix linting errors

* fix linting + testing error

* PagerDutyAlerting

* test_pagerduty_hanging_request_alerting

* fix unused imports

* docs pager duty

* @pytest.mark.flaky(retries=6, delay=2)

* test_model_info_bedrock_converse_enforcement
2025-01-01 07:12:51 -08:00
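
One way to read `_send_alert_if_thresholds_crossed` and the high-failure-rate test above: keep recent failures in a sliding time window and page once a threshold is crossed. A self-contained sketch of that mechanism (names and defaults are assumptions, not litellm's):

```python
import time
from collections import deque

class FailureRateAlerter:
    """Sliding-window failure alerter; thresholds here are illustrative."""

    def __init__(self, failure_threshold: int = 60, window_seconds: int = 60):
        self.failure_threshold = failure_threshold
        self.window_seconds = window_seconds
        self.failures: deque = deque()

    def record_failure(self) -> None:
        now = time.time()
        self.failures.append(now)
        # evict events older than the window
        while self.failures and now - self.failures[0] > self.window_seconds:
            self.failures.popleft()
        if len(self.failures) >= self.failure_threshold:
            self.send_pagerduty_alert()
            self.failures.clear()  # don't re-page on every subsequent failure

    def send_pagerduty_alert(self) -> None:
        # a real implementation would POST to PagerDuty's Events API v2
        print("ALERT: failure threshold crossed")
```
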
Daniel Ko
fcc4163f2b Added missing quote (#7481) 2024-12-31 23:23:49 -08:00
Krish Dholakia
39a11ad272 Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493)
* fix(langfuse_prompt_management.py): migrate dynamic logging to langfuse custom logger compatible class

* fix(langfuse_prompt_management.py): support failure callback logging to langfuse as well

* feat(proxy_server.py): support setting custom tokenizer on config.yaml

Allows customizing value for `/utils/token_counter`

* fix(proxy_server.py): fix linting errors

* test: skip if file not found

* style: cleanup unused import

* docs(configs.md): add docs on setting custom tokenizer
2024-12-31 23:18:41 -08:00
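
For context, `/utils/token_counter` is the proxy endpoint whose tokenizer this commit makes configurable. A minimal call sketch; the request and response shapes are assumed from the commit message:

```python
import requests

resp = requests.post(
    "http://localhost:4000/utils/token_counter",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "hello"}],
    },
)
print(resp.json())  # expected to include a token count and the tokenizer used
```
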
Ishaan Jaff
f35a9fde44 (docs) Add docs on using Vertex with Fine Tuning APIs (#7491)
* docs add Overview for vertex endpoints

* docs add vertex ft api to docs

* Advanced use case - Passing `adapter_size` to the Vertex AI API
2024-12-31 18:50:18 -08:00
Ishaan Jaff
f0ed02d3ee doc on streaming usage litellm proxy 2024-12-30 21:06:34 -08:00
Ishaan Jaff
ab9fe490fc localeCompare 2024-12-28 20:32:49 -08:00
Krrish Dholakia
32c96f33c6 docs(index.md): fix doc link 2024-12-28 20:28:50 -08:00
Krish Dholakia
75b5754376 Litellm dev 12 28 2024 p1 (#7463)
* refactor(utils.py): migrate amazon titan config to base config

* refactor(utils.py): refactor bedrock meta invoke model translation to use base config

* refactor(utils.py): move bedrock ai21 to base config

* refactor(utils.py): move bedrock cohere to base config

* refactor(utils.py): move bedrock mistral to use base config

* refactor(utils.py): move all provider optional param translations to using a config

* docs(clientside_auth.md): clarify how to pass vertex region to litellm proxy

* fix(utils.py): handle scenario where custom llm provider is none / empty

* fix: fix get config

* test(test_otel_load_tests.py): widen perf margin

* fix(utils.py): fix get provider config check to handle custom llm's

* fix(utils.py): fix check
2024-12-28 20:26:00 -08:00
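
The "base config" migration in this PR follows a common pattern: each provider subclasses a shared base and implements its own OpenAI-param translation. A hedged sketch of the idea; class and method names mirror the concept, not litellm's exact signatures:

```python
from abc import ABC, abstractmethod
from typing import List

class BaseLLMConfig(ABC):
    @abstractmethod
    def get_supported_openai_params(self, model: str) -> List[str]: ...

    @abstractmethod
    def map_openai_params(
        self, non_default_params: dict, optional_params: dict, model: str
    ) -> dict: ...

class AmazonTitanConfig(BaseLLMConfig):
    def get_supported_openai_params(self, model: str) -> List[str]:
        return ["max_tokens", "temperature", "top_p", "stream"]

    def map_openai_params(self, non_default_params, optional_params, model):
        # translate OpenAI names to Titan-style config keys (mapping assumed)
        mapping = {"max_tokens": "maxTokenCount", "temperature": "temperature",
                   "top_p": "topP"}
        for k, v in non_default_params.items():
            if k in mapping:
                optional_params[mapping[k]] = v
        return optional_params
```
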
Krrish Dholakia
c53ec60edd docs(index.md): add deepgram to release notes 2024-12-28 20:24:55 -08:00
Krrish Dholakia
2dcd8b8ed7 docs(deepgram.md): add table clarifying supported openai endpoint 2024-12-28 20:21:31 -08:00
Ishaan Jaff
cd42d5a995 doc update order 2024-12-28 20:20:12 -08:00
Krrish Dholakia
b1dd8b1ae8 docs(deepgram.md): add deepgram model support to docs 2024-12-28 20:19:12 -08:00
Ishaan Jaff
62919d72de update release note 2024-12-28 20:15:30 -08:00
Ishaan Jaff
4107a86fe5 test_e2e_batches_files 2024-12-28 19:54:04 -08:00
Krrish Dholakia
271fad6a19 docs(spending_monitoring.md): add section on disabling spend logs to db 2024-12-28 19:48:50 -08:00
Ishaan Jaff
ccdf263ad1 update clean up jobs 2024-12-28 19:45:19 -08:00
Krrish Dholakia
28f8a6c636 docs(spend_monitoring.md): cleanup doc 2024-12-28 19:42:03 -08:00
Krish Dholakia
9150722a00 Litellm dev 12 28 2024 p2 (#7458)
* docs(sidebar.js): docs for support model access groups for wildcard routes

* feat(key_management_endpoints.py): add check if user is premium_user when adding model access group for wildcard route

* refactor(docs/): make control model access a root-level doc in proxy sidebar

easier to discover how to control model access on litellm

* docs: more cleanup

* feat(fireworks_ai/): add document inlining support

Enables user to call non-vision models with images/pdfs/etc.

* test(test_fireworks_ai_translation.py): add unit testing for fireworks ai transform inline helper util

* docs(docs/): add document inlining details to fireworks ai docs

* feat(fireworks_ai/): allow user to dynamically disable auto add transform inline

allows client-side disabling of this feature for proxy users

* feat(fireworks_ai/): return 'supports_vision' and 'supports_pdf_input' true on all fireworks ai models

now true as fireworks ai supports document inlining

* test: fix tests

* fix(router.py): add unit testing for _is_model_access_group_for_wildcard_route
2024-12-28 19:38:06 -08:00
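
Fireworks AI's document inlining works by appending `#transform=inline` to a document URL, which these commits auto-add (with a client-side opt-out). A sketch of the resulting call with the URL fragment shown explicitly; the model name is illustrative:

```python
import litellm

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p3-70b-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this PDF"},
            # "#transform=inline" lets a non-vision model read the document;
            # per the commit, litellm can append this fragment automatically
            {"type": "image_url",
             "image_url": {"url": "https://example.com/doc.pdf#transform=inline"}},
        ],
    }],
)
```
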
Ishaan Jaff
4825e3eee5 (Bug Fix) Add health check support for realtime models (#7453)
* add mode: realtime

* add _realtime_health_check

* test_realtime_health_check

* azure _realtime_health_check

* _realtime_health_check

* Realtime Models

* fix code quality
2024-12-28 18:15:00 -08:00
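
A realtime health check like `_realtime_health_check` can't use a plain HTTP probe; conceptually it opens a websocket session and treats a successful connect as healthy. A rough sketch under that assumption (the URL, auth header, and `websockets` kwarg name vary by provider and library version):

```python
import asyncio
import websockets  # pip install websockets

async def realtime_health_check(url: str, api_key: str) -> bool:
    try:
        # note: older websockets versions name this kwarg "extra_headers"
        async with websockets.connect(
            url, additional_headers={"Authorization": f"Bearer {api_key}"}
        ):
            return True  # connection established => endpoint is reachable
    except Exception:
        return False

healthy = asyncio.run(realtime_health_check(
    "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview", "sk-..."
))
print(healthy)
```
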
Ishaan Jaff
213e2ae014 docs spend monitoring (#7461) 2024-12-28 16:39:24 -08:00
Ishaan Jaff
9d66196824 docs release notes 2024-12-27 21:41:21 -08:00
Ishaan Jaff
ee41ff159d add keywords 2024-12-27 21:39:46 -08:00
Ishaan Jaff
93bb558781 v1.56.3 release notes 2024-12-27 21:36:49 -08:00
Ishaan Jaff
f0c82943a0 docs guardrails 2024-12-27 16:31:03 -08:00
Ishaan Jaff
705efac0ff docs add guardrail spec 2024-12-27 15:47:43 -08:00
Ishaan Jaff
3b27fbcad4 docs update gemini/ link 2024-12-27 15:32:50 -08:00
Igor Ribeiro Lima
5ac2d98557 Add Gemini embedding doc (#7436) 2024-12-27 15:27:26 -08:00
Ishaan Jaff
6ec5ed8b3c (Feat) Log Guardrails run, guardrail response on logging integrations (#7445)
* add guardrail_information to SLP

* use standard_logging_guardrail_information

* track StandardLoggingGuardrailInformation

* use log_guardrail_information

* use log_guardrail_information

* docs guardrails

* docs guardrails

* update quick start

* fix presidio logging for sync functions

* update Guardrail type

* enforce add_standard_logging_guardrail_information_to_request_data

* update gd docs
2024-12-27 15:01:56 -08:00
Krrish Dholakia
0774fc71ce docs(index.md): new release notes 2024-12-26 22:01:29 -08:00
Krish Dholakia
f30260343b Litellm dev 12 26 2024 p3 (#7434)
* build(model_prices_and_context_window.json): update groq models to specify 'supports_vision' parameter

Closes https://github.com/BerriAI/litellm/issues/7433

* docs(groq.md): add groq vision example to docs

Closes https://github.com/BerriAI/litellm/issues/7433

* fix(prometheus.py): refactor self.litellm_proxy_failed_requests_metric to use label factory

* feat(prometheus.py): new 'litellm_proxy_failed_requests_by_tag_metric'

allows tracking failed requests by tag on proxy

* fix(prometheus.py): fix exception logging

* feat(prometheus.py): add new 'litellm_request_total_latency_by_tag_metric'

enables tracking latency by use-case

* feat(prometheus.py): add new llm api latency by tag metric

* feat(prometheus.py): new litellm_deployment_latency_per_output_token_by_tag metric

allows tracking deployment latency by tag

* fix(prometheus.py): refactor 'litellm_requests_metric' to use enum values + label factory

* feat(prometheus.py): new litellm_proxy_total_requests_by_tag metric

allows tracking total requests by tag

* feat(prometheus.py): new metric litellm_deployment_successful_fallbacks_by_tag

allows tracking deployment fallbacks by tag

* fix(prometheus.py): new 'litellm_deployment_failed_fallbacks_by_tag' metric

allows tracking failed fallbacks on deployment by custom tag

* test: fix test

* test: rename test to run earlier

* test: skip flaky test
2024-12-26 21:21:16 -08:00
Krish Dholakia
d6a2beb342 Support budget/rate limit tiers for keys (#7429)
* feat(proxy/utils.py): get associated litellm budget from db in combined_view for key

allows user to create rate limit tiers and associate those to keys

* feat(proxy/_types.py): update the value of key-level tpm/rpm/model max budget metrics with the associated budget table values if set

allows rate limit tiers to be easily applied to keys

* docs(rate_limit_tiers.md): add doc on setting rate limit / budget tiers

make feature discoverable

* feat(key_management_endpoints.py): return litellm_budget_table value in key generate

make it easy for user to know associated budget on key creation

* fix(key_management_endpoints.py): document 'budget_id' param in `/key/generate`

* docs(key_management_endpoints.py): document budget_id usage

* refactor(budget_management_endpoints.py): refactor budget endpoints into separate file - makes it easier to run documentation testing against it

* docs(test_api_docs.py): add budget endpoints to ci/cd doc test + add missing param info to docs

* fix(customer_endpoints.py): use new pydantic obj name

* docs(user_management_heirarchy.md): add simple doc explaining teams/keys/org/users on litellm

* Litellm dev 12 26 2024 p2 (#7432)

* (Feat) Add logging for `POST v1/fine_tuning/jobs`  (#7426)

* init commit ft jobs logging

* add ft logging

* add logging for FineTuningJob

* simple FT Job create test

* (docs) - show all supported Azure OpenAI endpoints in overview  (#7428)

* azure batches

* update doc

* docs azure endpoints

* docs endpoints on azure

* docs azure batches api

* docs azure batches api

* fix(key_management_endpoints.py): fix key update to actually work

* test(test_key_management.py): add e2e test asserting ui key update call works

* fix: proxy/_types - fix linting errors

* test: update test

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* fix: test

* fix(parallel_request_limiter.py): enforce tpm/rpm limits on key from tiers

* fix: fix linting errors

* test: fix test

* fix: remove unused import

* test: update test

* docs(customer_endpoints.py): document new model_max_budget param

* test: specify unique key alias

* docs(budget_management_endpoints.py): document new model_max_budget param

* test: fix test

* test: fix tests

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-12-26 19:05:27 -08:00
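
The tier flow this PR describes: create a budget row once, then attach it to any number of keys via `budget_id`, so each key inherits the tier's tpm/rpm/max-budget values. A hedged sketch against the proxy's management endpoints; the payload fields are assumed from the commit messages:

```python
import requests

BASE = "http://localhost:4000"
HDRS = {"Authorization": "Bearer sk-1234"}

# 1. define the tier
requests.post(f"{BASE}/budget/new", headers=HDRS, json={
    "budget_id": "free-tier",
    "tpm_limit": 1000,
    "rpm_limit": 10,
    "max_budget": 5.0,  # USD
})

# 2. issue a key on that tier; it inherits the tier's limits
key = requests.post(f"{BASE}/key/generate", headers=HDRS, json={
    "budget_id": "free-tier",
}).json()
print(key)
```
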
Ishaan Jaff
12e8fe72f9 docs guardrail params (#7430) 2024-12-26 11:08:47 -08:00
Ishaan Jaff
4df3b6c075 (docs) - show all supported Azure OpenAI endpoints in overview (#7428)
* azure batches

* update doc

* docs azure endpoints

* docs endpoints on azure

* docs azure batches api

* docs azure batches api
2024-12-26 09:01:41 -08:00
Krrish Dholakia
6dd15183a9 docs: cleanup doc 2024-12-25 21:23:42 -08:00
Krrish Dholakia
0b0e043fc4 docs(fireworks_ai.md): add audio transcription to fireworks ai doc 2024-12-25 21:22:51 -08:00
Ishaan Jaff
c9f61b3d23 docs - batches cost tracking (#7422) 2024-12-25 20:13:26 -08:00
Krish Dholakia
7af1f8a0c7 Litellm dev 12 25 2025 p2 (#7420)
* test: add new test image embedding to base llm unit tests

Addresses https://github.com/BerriAI/litellm/issues/6515

* fix(bedrock/embed/multimodal-embeddings): strip data prefix from image urls for bedrock multimodal embeddings

Fix https://github.com/BerriAI/litellm/issues/6515

* feat: initial commit for fireworks ai audio transcription support

Relevant issue: https://github.com/BerriAI/litellm/issues/7134

* test: initial fireworks ai test

* feat(fireworks_ai/): implemented fireworks ai audio transcription config

* fix(utils.py): register fireworks ai audio transcription config, in config manager

* fix(utils.py): add fireworks ai param translation to 'get_optional_params_transcription'

* refactor(fireworks_ai/): define text completion route with model name handling

moves model name handling to specific fireworks routes, as required by their api

* refactor(fireworks_ai/chat): define transform_request - allows fixing model if accounts/ is missing

* fix: fix linting errors

* fix: fix linting errors

* fix: fix linting errors

* fix: fix linting errors

* fix(handler.py): fix linting errors

* fix(main.py): fix tgai text completion route

* refactor(together_ai/completion): refactors together ai text completion route to just use provider transform request

* refactor: move test_fine_tuning_api out of local_testing

reduces local testing ci/cd time
2024-12-25 18:35:34 -08:00
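
A minimal sketch of the fireworks audio transcription support added here, through litellm's OpenAI-style transcription interface; the model name is an assumption:

```python
import litellm

with open("speech.mp3", "rb") as audio_file:
    transcript = litellm.transcription(
        model="fireworks_ai/whisper-v3",  # illustrative model name
        file=audio_file,
    )
print(transcript.text)
```
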
Ishaan Jaff
55b5a1221c fix docs warning (#7419) 2024-12-25 16:42:14 -08:00
Ishaan Jaff
5612103ea3 (feat) Support Dynamic Params for guardrails (#7415)
* update CustomGuardrail

* unit test custom guardrails

* add dynamic params for aporia

* add dynamic params to bedrock guard

* add dynamic params for all guardrails

* fix linting

* fix should_run_guardrail

* _validate_premium_user

* update guardrail doc

* doc update

* update code q

* should_run_guardrail
2024-12-25 16:07:29 -08:00
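
"Dynamic params" here means guardrail-specific settings supplied per request rather than in config.yaml. A sketch of what such a request might look like; the exact payload shape is an assumption based on the commit titles:

```python
import requests

resp = requests.post(
    "http://localhost:4000/v1/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "hi"}],
        # guardrail name plus request-scoped overrides (assumed shape)
        "guardrails": [
            {"bedrock-guard": {"extra_body": {"guardrailVersion": "2"}}}
        ],
    },
)
print(resp.status_code)
```
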
Ishaan Jaff
43670545b4 update docs base docker 2024-12-25 15:51:19 -08:00
Ishaan Jaff
be6d58ab37 docs files api 2024-12-24 20:46:43 -08:00
Ishaan Jaff
0627450808 (feat) /batches Add support for using /batches endpoints in OAI format (#7402)
* run azure testing on ci/cd

* update docs on azure batches endpoints

* add input azure.jsonl

* refactor - use separate file for batches endpoints

* fixes for passing custom llm provider to /batch endpoints

* pass custom llm provider to files endpoints

* update azure batches doc

* add info for azure batches api

* update batches endpoints

* use simple helper for raising proxy exception

* update config.yml

* fix imports

* update tests

* use existing settings

* update env var used

* update configs

* update config.yml

* update ft testing
2024-12-24 16:58:05 -08:00
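
What the /batches work above enables, sketched with the OpenAI client pointed at the proxy; the `custom_llm_provider` routing in `extra_body` is taken from the commit titles and should be treated as an assumption:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

batch_file = client.files.create(
    file=open("azure.jsonl", "rb"),  # the input file the commit adds
    purpose="batch",
    extra_body={"custom_llm_provider": "azure"},  # provider routing (assumed)
)
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
    extra_body={"custom_llm_provider": "azure"},
)
print(batch.id)
```
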
Krish Dholakia
8fe1356406 LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394)
* build(model_prices_and_context_window.json): add gemini-1.5-flash context caching

* fix(context_caching/transformation.py): just use last identified cache point

Fixes https://github.com/BerriAI/litellm/issues/6738

* fix(context_caching/transformation.py): pick first contiguous block - handles system message error from google

Fixes https://github.com/BerriAI/litellm/issues/6738

* fix(vertex_ai/gemini/): track context caching tokens

* refactor(gemini/): place transformation.py inside `chat/` folder

make it easy for user to know we support the equivalent endpoint

* fix: fix import

* refactor(vertex_ai/): move vertex_ai cost calc inside vertex_ai/ folder

make it easier to see cost calculation logic

* fix: fix linting errors

* fix: fix circular import

* feat(gemini/cost_calculator.py): support gemini context caching cost calculation

generifies anthropic's cost calculation function and uses it across anthropic + gemini

* build(model_prices_and_context_window.json): add cost tracking for gemini-1.5-flash-002 w/ context caching

Closes https://github.com/BerriAI/litellm/issues/6891

* docs(gemini.md): add gemini context caching architecture diagram

make it easier for user to understand how context caching works

* docs(gemini.md): link to relevant gemini context caching code

* docs(gemini/context_caching): add readme in github, make it easy for dev to know context caching is supported + where to go for code

* fix(llm_cost_calc/utils.py): handle gemini 128k token diff cost calc scenario

* fix(deepseek/cost_calculator.py): support deepseek context caching cost calculation

* test: fix test
2024-12-23 22:02:52 -08:00
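
The cache-point fixes above ("last identified cache point", "first contiguous block") concern where litellm splits cached from non-cached content. A usage sketch, assuming litellm accepts anthropic-style `cache_control` markers for gemini context caching:

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-flash-002",
    messages=[
        {
            "role": "system",
            "content": [{
                "type": "text",
                "text": "<large shared context, reused across requests>",
                "cache_control": {"type": "ephemeral"},  # marks the cache point
            }],
        },
        {"role": "user", "content": "Answer using the cached context."},
    ],
)
print(response.usage)  # context-caching tokens tracked per the commit
```
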
Ishaan Jaff
905e89bf60 update release notes 2024-12-23 21:48:33 -08:00
Ishaan Jaff
918e5a3b67 update release notes 2024-12-23 21:43:47 -08:00
Ishaan Jaff
2690d7485a release notes 2024-12-23 21:38:56 -08:00
Ishaan Jaff
d3e163cae3 docs batches 2024-12-23 21:24:06 -08:00
Ishaan Jaff
eb9108cf51 docs add files to supported endpoints 2024-12-23 20:51:34 -08:00