Commit graph

4702 commits

Krish Dholakia
907bcd3a62
Litellm dev 01 08 2025 p1 (#7640)
* feat(ui_sso.py): support reading team ids from sso token

* feat(ui_sso.py): working upsert of sso user team memberships in litellm - if team exists

Adds the user to the relevant teams, if the user belongs to teams that already exist on litellm

* fix(ui_sso.py): safely handle add team member task

* build(ui/): support setting team id when creating team on UI

* build(ui/): teams.tsx

allow setting team id on ui

* build(circle_ci/requirements.txt): add fastapi-sso to ci/cd testing

* fix: fix linting errors
2025-01-08 22:08:20 -08:00
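A minimal sketch of the SSO-team flow described above, assuming a JWT-style id token and a `team_ids` claim (the claim name and decode flow are illustrative, not litellm's actual ui_sso.py code):

```python
import jwt  # pyjwt

def get_team_ids_from_sso_token(id_token: str) -> list[str]:
    # Decode without signature verification for illustration only; a real SSO
    # flow must verify the token against the IdP's signing keys (JWKS).
    claims = jwt.decode(id_token, options={"verify_signature": False})
    team_ids = claims.get("team_ids", [])  # assumed claim name
    # Membership is then upserted only for teams that already exist on
    # litellm, mirroring the "if team exists" behavior in the commit above.
    return team_ids if isinstance(team_ids, list) else [team_ids]
```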
Krish Dholakia
1e3370f3cb
LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 (#7643)
* fix(streaming_chunk_builder_utils.py): add test for groq tool calling + streaming + combine chunks

Addresses https://github.com/BerriAI/litellm/issues/7621

* fix(streaming_utils.py): fix modelresponseiterator for openai like chunk parser

ensures chunk parser uses the correct tool call id when translating the chunk

Fixes https://github.com/BerriAI/litellm/issues/7621

* build(model_hub.tsx): display cost pricing on model hub

* build(model_hub.tsx): show cost per token pricing + complete model information

* fix(types/utils.py): fix usage object handling
2025-01-08 19:45:19 -08:00
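The groq tool-calling fix above is easiest to see with litellm's public `stream_chunk_builder` helper; a sketch (model and tool are placeholders):

```python
import litellm

messages = [{"role": "user", "content": "What's the weather in Boston?"}]
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Stream a tool call, collecting every chunk.
chunks = list(litellm.completion(
    model="groq/llama3-8b-8192",
    messages=messages,
    tools=tools,
    stream=True,
))
# Recombine the streamed deltas; tool-call fragments must be matched to the
# correct tool call id (the bug this commit fixes).
rebuilt = litellm.stream_chunk_builder(chunks, messages=messages)
print(rebuilt.choices[0].message.tool_calls)
```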
Ishaan Jaff
48d4f79206
fix `is_llm_api_route` check (#7631) 2025-01-08 18:45:59 -08:00
Krish Dholakia
0ffc5379ea
Litellm dev 01 07 2025 p2 (#7622)
* build(ui/): update ui

* fix: drop unsupported non-whitespace characters for real when calling… (#7484)

* fix: drop unsupported non-whitespace characters for real when calling anthropic with stop sequences

* test: add parameterized test for _map_stop_sequences method in AnthropicConfig

---------

Co-authored-by: Wolfram Ravenwolf <52386626+WolframRavenwolf@users.noreply.github.com>
2025-01-08 16:56:39 -08:00
Ishaan Jaff
fd0a03f719
(feat) - allow building litellm proxy from pip package (#7633)
* fix working build from pip

* add tests for proxy_build_from_pip_tests

* doc clean up for deployment

* docs cleanup

* docs build from pip

* fix cd docker/build_from_pip
2025-01-08 16:36:57 -08:00
Krish Dholakia
a187cee538
Litellm dev 01 07 2025 p3 (#7635)
* fix(__init__.py): fix mistral large tool calling

map bedrock mistral large to converse endpoint

Fixes https://github.com/BerriAI/litellm/issues/7521

* braintrust logging: respect project_id, add more metrics + more (#7613)

* braintrust logging: respect project_id, add more metrics

* braintrust logger: improve json formatting

* braintrust logger: add test for passing specific project_id

* rm unneeded import

* braintrust logging: rm unneeded var in tests

* add project_name

* update docs

---------

Co-authored-by: H <no@email.com>

---------

Co-authored-by: hi019 <65871571+hi019@users.noreply.github.com>
Co-authored-by: H <no@email.com>
2025-01-08 11:46:24 -08:00
Krish Dholakia
07c5f136f1
fix(utils.py): fix select tokenizer for custom tokenizer (#7599)
* fix(utils.py): fix select tokenizer for custom tokenizer

* fix(router.py): fix 'utils/token_counter' endpoint
2025-01-07 22:37:09 -08:00
Ishaan Jaff
081826a5d6
(Feat) soft budget alerts on keys (#7623)
* add `WebhookEvent(CallInfo)` class

* handle soft budget alerts

* handle soft budget

* fix budget alerts

* fix CallInfo

* fix _get_user_info_str

* test_soft_budget_alerts

* test_soft_budget_alert
2025-01-07 21:36:34 -08:00
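A sketch of using the new soft-budget alerts from the client side, assuming a local proxy and a placeholder master key (`soft_budget` is the key-generate param this PR wires up):

```python
import requests

# Create a key with a soft budget: crossing it should trigger an alert
# (webhook/Slack) without blocking the key itself.
resp = requests.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder master key
    json={"key_alias": "demo-soft-budget-key", "soft_budget": 10.0},
)
print(resp.json()["key"])
```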
Krish Dholakia
4e69711411
Litellm dev 01 07 2025 p1 (#7618)
* fix(main.py): pass custom llm provider on litellm logging provider update

* fix(cost_calculator.py): don't append provider name to return model if existing llm provider

Fixes https://github.com/BerriAI/litellm/issues/7607

* fix(prometheus_services.py): fix prometheus system health error logging

Fixes https://github.com/BerriAI/litellm/issues/7611
2025-01-07 21:22:31 -08:00
Krish Dholakia
fef7839e8a
Litellm dev 01 06 2025 p1 (#7594)
* fix(custom_logger.py): expose new 'async_get_chat_completion_prompt' event hook

* fix(custom_logger.py): langfuse_prompt_management.py

remove 'headers' from custom logger 'async_get_chat_completion_prompt' and 'get_chat_completion_prompt' event hooks

* feat(router.py): expose new function for prompt management based routing

* feat(router.py): partial working router prompt factory logic

allows load balanced model to be used for model name w/ langfuse prompt management call

* feat(router.py): fix prompt management with load balanced model group

* feat(langfuse_prompt_management.py): support reading in openai params from langfuse

enables user to define optional params on langfuse vs. client code

* test(test_Router.py): add unit test for router based langfuse prompt management

* fix: fix linting errors
2025-01-06 21:26:21 -08:00
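A loose sketch of the router-based prompt management flow described above; the `langfuse/` model prefix and the `prompt_id`/`prompt_variables` params follow litellm's langfuse prompt-management convention, but treat the exact shape as an assumption:

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-group", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "gpt-group", "litellm_params": {"model": "gpt-4o-mini"}},
    ]
)
# The prompt template is fetched from langfuse; the load-balanced "gpt-group"
# deployment serves the call (the behavior this PR enables).
resp = router.completion(
    model="langfuse/gpt-group",
    prompt_id="my-langfuse-prompt",          # assumed param name
    prompt_variables={"topic": "testing"},   # assumed param name
    messages=[{"role": "user", "content": "fallback if no template"}],
)
```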
Ishaan Jaff
819079f23b
(proxy perf improvement) - remove redundant .copy() operation (#7564)
* latency fix proxy

* remove useless copy in add_key_level_controls
2025-01-06 20:36:47 -08:00
Ishaan Jaff
6125ba1e2b
(Feat) - allow including dd-trace in litellm base image (#7587)
* introduce USE_DDTRACE=true

* update dd tracer

* update

* bump dd trace

* use og slim image

* DD tracing

* fix _init_dd_tracer
2025-01-06 17:27:09 -08:00
Ishaan Jaff
0b5c1392f7
fix _return_user_api_key_auth_obj (#7591) 2025-01-06 16:43:14 -08:00
Ishaan Jaff
2bf20ebfdf
latency fix proxy (#7563)
2025-01-04 20:18:32 -08:00
Krish Dholakia
34a9833b85
Support deleting keys by key_alias (#7552)
* feat(key_management_endpoints.py): allow deleting keys based on key alias

easier for proxy admin to delete known bad key

* fix(key_management_event_hooks.py): fix linting error

* docs(key_management_endpoints.py): document new key_aliases param

* fix(key_management_endpoints.py): return deleted keys to user

fixes return when passing key aliases
2025-01-04 19:41:48 -08:00
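A sketch of the new alias-based delete, assuming a local proxy and a placeholder master key:

```python
import requests

# Delete a known-bad key by its alias instead of its token, using the
# `key_aliases` param this PR documents; per the commit, the response
# includes the deleted keys.
resp = requests.post(
    "http://localhost:4000/key/delete",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder master key
    json={"key_aliases": ["known-bad-key"]},
)
print(resp.json())
```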
Ishaan Jaff
d74fa39454
fix [PROXY] returned data from litellm_pre_call_util (#7558) 2025-01-04 18:47:36 -08:00
Krish Dholakia
f1540ceeab
Create and view organizations + assign org admins on the Proxy UI (#7557)
* feat: initial commit for new 'organizations' tab on ui

* build(ui/): create generic card for rendering complete org data table

can be reused in teams as well

simplifies things

* build(ui/): display created orgs on ui

* build(ui/): support adding orgs via UI

* build(ui/): add org in selection dropdown

* build(organizations.tsx): allow assigning org admins

* build(ui/): show org members on ui

* build(ui/): cleanup + show actual models on org dropdown

* build(ui/): explain user roles within organization
2025-01-04 17:31:24 -08:00
Ishaan Jaff
46d9d29bff
(Feat) Hashicorp Secret Manager - Allow storing virtual keys in secret manager (#7549)
* use a base abstract class

* async_write_secret for hcorp

* fix hcorp

* async_write_secret for hashicorp secret manager

* store virtual keys in hcorp

* add delete secret

* test_hashicorp_secret_manager_write_secret

* test_hashicorp_secret_manager_delete_secret

* docs Supported Secret Managers

* docs storing keys in hcorp

* docs hcorp

* docs secret managers

* test_key_generate_with_secret_manager_call

* fix unused imports
2025-01-04 11:35:59 -08:00
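For context, what "store virtual keys in Vault" amounts to against the KV v2 engine, sketched with the `hvac` client (litellm's HashicorpSecretManager wraps equivalent read/write/delete calls; URL, token, and paths here are placeholders):

```python
import hvac

client = hvac.Client(url="http://127.0.0.1:8200", token="dev-root-token")

# Write a virtual key as a secret (async_write_secret in the PR).
client.secrets.kv.v2.create_or_update_secret(
    path="litellm/virtual-keys/sk-example",
    secret={"key": "sk-example"},
)
# Read it back.
read = client.secrets.kv.v2.read_secret_version(path="litellm/virtual-keys/sk-example")
print(read["data"]["data"]["key"])
# Delete it (the delete-secret support added in this PR).
client.secrets.kv.v2.delete_metadata_and_all_versions(path="litellm/virtual-keys/sk-example")
```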
Krish Dholakia
d43d83f9ef
feat(router.py): support request prioritization for text completion c… (#7540)
* feat(router.py): support request prioritization for text completion calls

* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`

Fixes https://github.com/BerriAI/litellm/issues/7485

* fix: fix linting errors

* fix: fix linting error

* test(test_router_helper_utils.py): add direct test for '_schedule_factory'

Fixes code qa test
2025-01-03 19:35:44 -08:00
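A sketch of the prioritization feature extended here to text completion, following litellm's scheduler convention that lower `priority` values run first:

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo-instruct",
         "litellm_params": {"model": "gpt-3.5-turbo-instruct"}},
    ],
)

async def main():
    # priority=0 puts this request at the front of the scheduler queue.
    resp = await router.atext_completion(
        model="gpt-3.5-turbo-instruct",
        prompt="Say hi",
        priority=0,
    )
    print(resp.choices[0].text)

asyncio.run(main())
```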
Ishaan Jaff
716efd5fad
(fix proxy perf) use _read_request_body instead of ast.literal_eval to get better performance (#7545)
* fix ast literal eval

* run ci/cd again
2025-01-03 17:48:32 -08:00
Ishaan Jaff
1bb4941036
[Feature]: - allow print alert log to console (#7534)
* update send_to_webhook

* test_print_alerting_payload_warning

* add alerting_args spec

* test_alerting.py
2025-01-03 17:48:13 -08:00
Krish Dholakia
6843f3a2bb
Revert "fix: add missing parameters order, limit, before, and after in get_as…" (#7542)
This reverts commit 4b0505dffd.
2025-01-03 16:32:12 -08:00
Ishaan Jaff
02875d4ae8
(fix) aiohttp_openai/ route - get to 1K RPS on single instance (#7539)
* ClientSession

* reuse client_session

* _init_client_session

* fix aiohttp
2025-01-03 15:12:17 -08:00
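Presumably the new route is exercised by prefixing the model with `aiohttp_openai/`; a hedged sketch with a placeholder endpoint and key:

```python
import litellm

# Route an OpenAI-compatible call through the aiohttp-backed handler
# introduced in this PR; api_base/api_key are placeholders.
resp = litellm.completion(
    model="aiohttp_openai/gpt-4o-mini",
    api_base="https://example-openai-compatible.host/v1",
    api_key="sk-placeholder",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```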
Jean Carlo de Souza
4b0505dffd
fix: add missing parameters order, limit, before, and after in get_assistants method for openai (#7537)
- Ensured that `before` and `after` parameters are only passed when provided to avoid AttributeError.
- Implemented safe access using default values for `before` and `after` to prevent missing attribute issues.
- Added consistent handling of `order` and `limit` to improve flexibility and robustness in API calls.
2025-01-03 14:41:54 -08:00
Krish Dholakia
33f301ec86
Litellm dev 01 02 2025 p1 (#7516)
* fix(redact_messages.py): fix redact messages for non-model response input to be dictionary

fixes issue with otel logging when message redaction is enabled

* fix(proxy_server.py): fix langfuse key leak in exception string

* test: fix test

* test: fix test

* test: fix tests
2025-01-03 14:40:57 -08:00
Krish Dholakia
f6698e871f
Fix langfuse prompt management on proxy (#7535)
* fix(types/utils.py): support langfuse + humanloop routes on llm router

* fix(main.py): remove acompletion elif block

just await if coroutine returned
2025-01-03 12:42:37 -08:00
Ishaan Jaff
6e31bcf5a7
fix - access metadata (#7523) 2025-01-03 10:02:10 -08:00
Ishaan Jaff
d861aa8ff3
(perf) use aiohttp for custom_openai (#7514)
* use aiohttp handler

* BaseLLMAIOHTTPHandler

* use CustomOpenAIChatConfig

* CustomOpenAIChatConfig

* CustomOpenAIChatConfig

* fix linting

* AiohttpOpenAIChatConfig

* fix order

* aiohttp_openai
2025-01-02 22:15:17 -08:00
Krish Dholakia
07fc394072
Litellm dev 01 01 2025 p1 (#7498)
* refactor(prometheus.py): refactor to remove `_tag` metrics and incorporate in regular metrics

* fix(prometheus.py): handle label values not set in enum values

* feat(prometheus.py): working e2e custom metadata labels

* docs(prometheus.md): update docs to clarify how custom metrics would work

* test(test_prometheus_unit_tests.py): fix test

* test: add unit testing
2025-01-01 18:59:28 -08:00
Krish Dholakia
0120176541
Litellm dev 12 30 2024 p2 (#7495)
* test(azure_openai_o1.py): initial commit with testing for azure openai o1 preview model

* fix(base_llm_unit_tests.py): handle azure o1 preview response format tests

skip as o1 on azure doesn't support tool calling yet

* fix: initial commit of azure o1 handler using openai caller

simplifies calling + allows fake streaming logic already implemented for openai to just work

* feat(azure/o1_handler.py): fake o1 streaming for azure o1 models

azure does not currently support streaming for o1

* feat(o1_transformation.py): support overriding 'should_fake_stream' on azure/o1 via 'supports_native_streaming' param on model info

enables user to toggle on when azure allows o1 streaming without needing to bump versions

* style(router.py): remove 'give feedback/get help' messaging when router is used

Prevents noisy messaging

Closes https://github.com/BerriAI/litellm/issues/5942

* fix(types/utils.py): handle none logprobs

Fixes https://github.com/BerriAI/litellm/issues/328

* fix(exception_mapping_utils.py): fix error str unbound error

* refactor(azure_ai/): move to openai_like chat completion handler

allows for easy swapping of api base url's (e.g. ai.services.com)

Fixes https://github.com/BerriAI/litellm/issues/7275

* refactor(azure_ai/): move to base llm http handler

* fix(azure_ai/): handle differing api endpoints

* fix(azure_ai/): make sure all unit tests are passing

* fix: fix linting errors

* fix: fix linting errors

* fix: fix linting error

* fix: fix linting errors

* fix(azure_ai/transformation.py): handle extra body param

* fix(azure_ai/transformation.py): fix max retries param handling

* fix: fix test

* test(test_azure_o1.py): fix test

* fix(llm_http_handler.py): support handling azure ai unprocessable entity error

* fix(llm_http_handler.py): handle sync invalid param error for azure ai

* fix(azure_ai/): streaming support with base_llm_http_handler

* fix(llm_http_handler.py): working sync stream calls with unprocessable entity handling for azure ai

* fix: fix linting errors

* fix(llm_http_handler.py): fix linting error

* fix(azure_ai/): handle cohere tool call invalid index param error
2025-01-01 18:57:29 -08:00
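A sketch of the `supports_native_streaming` toggle added above, set via `model_info` on a router deployment (endpoint and key are placeholders):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "o1",
            "litellm_params": {
                "model": "azure/o1-preview",
                "api_base": "https://my-endpoint.openai.azure.com",  # placeholder
                "api_key": "azure-key-placeholder",
            },
            # Flip to True once the Azure deployment streams o1 natively,
            # without waiting for a litellm version bump (per this PR).
            "model_info": {"supports_native_streaming": False},
        }
    ]
)
```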
Ishaan Jaff
cf60444916
(Feat) Add support for reading secrets from Hashicorp vault (#7497)
* HashicorpSecretManager

* test_hashicorp_secret_manager

* use 1 helper initialize_secret_manager

* add HASHICORP_VAULT

* working config

* hcorp read_secret

* HashicorpSecretManager

* add secret_manager_testing

* use 1 folder for secret manager testing

* test_hashicorp_secret_manager_get_secret

* HashicorpSecretManager

* docs HCP secrets

* update folder name

* docs hcorp secret manager

* remove unused imports

* add conftest.py

* fix tests

* docs document env vars
2025-01-01 18:35:05 -08:00
Ishaan Jaff
38bfefa6ef
(Feat) - LiteLLM Use UsernamePasswordCredential for Azure OpenAI (#7496)
* add get_azure_ad_token_from_username_password

* docs azure use username / password for auth

* update doc

* get_azure_ad_token_from_username_password

* test test_get_azure_ad_token_from_username_password
2025-01-01 14:11:27 -08:00
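Roughly what `get_azure_ad_token_from_username_password` plausibly does under the hood, sketched with the `azure-identity` credential class (client id, username, and password are placeholders):

```python
from azure.identity import UsernamePasswordCredential

# Exchange username/password for an Entra ID token usable as an Azure OpenAI
# bearer token.
credential = UsernamePasswordCredential(
    client_id="azure-app-client-id",
    username="user@example.com",
    password="app-password",
)
token = credential.get_token("https://cognitiveservices.azure.com/.default")
print(token.token[:12], "...")
```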
Ishaan Jaff
2979b8301c
(feat) POST /fine_tuning/jobs support passing vertex specific hyper params (#7490)
* update convert_openai_request_to_vertex

* test_create_vertex_fine_tune_jobs_mocked

* fix order of methods

* update LiteLLMFineTuningJobCreate

* update OpenAIFineTuningHyperparameters

* update vertex hyper params in response

* _transform_openai_hyperparameters_to_vertex_hyperparameters

* supervised_tuning_spec["hyperParameters"] fix

* fix mapping for ft params testing

* docs fine tuning apis

* fix test_convert_basic_openai_request_to_vertex_request

* update hyperparams for create fine tuning

* fix linting

* test_create_vertex_fine_tune_jobs_mocked_with_hyperparameters

* run ci/cd again

* test_convert_basic_openai_request_to_vertex_request
2025-01-01 07:44:48 -08:00
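A hedged sketch of the feature: creating a Vertex fine-tune through litellm's OpenAI-style API with hyperparameters; the accepted hyperparameter keys and the GCS path are assumptions based on the transformation helpers named above:

```python
import litellm

# OpenAI-style hyperparameters get mapped to vertex's supervised tuning spec
# by the transformation this PR adds.
ft_job = litellm.create_fine_tuning_job(
    model="gemini-1.0-pro-002",
    training_file="gs://my-bucket/train.jsonl",   # placeholder GCS path
    custom_llm_provider="vertex_ai",
    vertex_project="my-project",                  # placeholder
    vertex_location="us-central1",                # placeholder
    hyperparameters={"n_epochs": 3, "learning_rate_multiplier": 0.1},
)
print(ft_job)
```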
Ishaan Jaff
03b1db5a7d
(Feat) - Add PagerDuty Alerting Integration (#7478)
* define basic types

* fix verbose_logger.exception statement

* fix basic alerting

* test pager duty alerting

* test_pagerduty_alerting_high_failure_rate

* PagerDutyAlerting

* async_log_failure_event

* use pre_call_hook

* add _request_is_completed helper util

* update AlertingConfig

* rename PagerDutyInternalEvent

* _send_alert_if_thresholds_crossed

* use pagerduty as _custom_logger_compatible_callbacks_literal

* fix slack alerting imports

* fix imports in slack alerting

* PagerDutyAlerting

* fix _load_alerting_settings

* test_pagerduty_hanging_request_alerting

* working pager duty alerting

* fix linting

* doc pager duty alerting

* update hanging_response_handler

* fix import location

* update failure_threshold

* update async_pre_call_hook

* docs pagerduty

* test - callback_class_str_to_classType

* fix linting errors

* fix linting + testing error

* PagerDutyAlerting

* test_pagerduty_hanging_request_alerting

* fix unused imports

* docs pager duty

* @pytest.mark.flaky(retries=6, delay=2)

* test_model_info_bedrock_converse_enforcement
2025-01-01 07:12:51 -08:00
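A sketch of enabling the new integration in code; registering `"pagerduty"` as a callback follows the `_custom_logger_compatible_callbacks_literal` change above, while the env var name is an assumption (see the pagerduty docs this PR adds):

```python
import os
import litellm

os.environ["PAGERDUTY_API_KEY"] = "pd-routing-key-placeholder"  # assumed var name
litellm.callbacks = ["pagerduty"]  # registered as a custom-logger-compatible callback

# Failures/hanging requests past the configured thresholds should now alert;
# mock_response avoids a real provider call in this sketch.
litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
    mock_response="hi back",
)
```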
Krish Dholakia
39cbd9d878
Litellm dev 12 31 2024 p1 (#7488)
* fix(internal_user_endpoints.py): fix team list sort - handle team_alias being set + None

* fix(key_management_endpoints.py): allow team admin to create key for member via admin ui

Fixes https://github.com/BerriAI/litellm/issues/7482

* fix(proxy_server.py): allow querying info on specific model group via `/model_group/info`

allows client-side user to get model info from proxy

* fix(proxy_server.py): add docstring on `/model_group/info` showing how to filter by model name

* test(test_proxy_utils.py): add unit test for returning model group info filtered

* fix(proxy_server.py): fix query param

* fix(test_Get_model_info.py): handle no whitelisted bedrock models
2024-12-31 23:21:51 -08:00
Krish Dholakia
080de89cfb
Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493)
* fix(langfuse_prompt_management.py): migrate dynamic logging to langfuse custom logger compatible class

* fix(langfuse_prompt_management.py): support failure callback logging to langfuse as well

* feat(proxy_server.py): support setting custom tokenizer on config.yaml

Allows customizing value for `/utils/token_counter`

* fix(proxy_server.py): fix linting errors

* test: skip if file not found

* style: cleanup unused import

* docs(configs.md): add docs on setting custom tokenizer
2024-12-31 23:18:41 -08:00
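A sketch of hitting the endpoint whose tokenizer is now configurable; the request shape is assumed from the endpoint name, and URL/key are placeholders:

```python
import requests

# Count tokens via the proxy; with this PR, the tokenizer backing the count
# can be customized in config.yaml.
resp = requests.post(
    "http://localhost:4000/utils/token_counter",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder master key
    json={"model": "gpt-3.5-turbo",
          "messages": [{"role": "user", "content": "count me"}]},
)
print(resp.json())  # e.g. {"total_tokens": ...}
```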
Ishaan Jaff
859f6e1635
(fix) v1/fine_tuning/jobs with VertexAI (#7487)
* update convert_openai_request_to_vertex

* test_create_vertex_fine_tune_jobs_mocked
2024-12-31 15:09:56 -08:00
Krish Dholakia
347779b813
Litellm dev 12 30 2024 p1 (#7480)
* test(azure_openai_o1.py): initial commit with testing for azure openai o1 preview model

* fix(base_llm_unit_tests.py): handle azure o1 preview response format tests

skip as o1 on azure doesn't support tool calling yet

* fix: initial commit of azure o1 handler using openai caller

simplifies calling + allows fake streaming logic already implemented for openai to just work

* feat(azure/o1_handler.py): fake o1 streaming for azure o1 models

azure does not currently support streaming for o1

* feat(o1_transformation.py): support overriding 'should_fake_stream' on azure/o1 via 'supports_native_streaming' param on model info

enables user to toggle on when azure allows o1 streaming without needing to bump versions

* style(router.py): remove 'give feedback/get help' messaging when router is used

Prevents noisy messaging

Closes https://github.com/BerriAI/litellm/issues/5942

* test: fix azure o1 test

* test: fix tests

* fix: fix test
2024-12-30 21:52:52 -08:00
Ishaan Jaff
a003af6c04
(fix) litellm.amoderation - support using model=openai/omni-moderation-latest, model=omni-moderation-latest, model=None (#7475)
* test_moderation_endpoint

* fix litellm.amoderation
2024-12-30 09:42:51 -08:00
Krish Dholakia
cfb6890b9f
Litellm dev 12 28 2024 p2 (#7458)
* docs(sidebar.js): docs for support model access groups for wildcard routes

* feat(key_management_endpoints.py): add check if user is premium_user when adding model access group for wildcard route

* refactor(docs/): make control model access a root-level doc in proxy sidebar

easier to discover how to control model access on litellm

* docs: more cleanup

* feat(fireworks_ai/): add document inlining support

Enables user to call non-vision models with images/pdfs/etc.

* test(test_fireworks_ai_translation.py): add unit testing for fireworks ai transform inline helper util

* docs(docs/): add document inlining details to fireworks ai docs

* feat(fireworks_ai/): allow user to dynamically disable auto add transform inline

allows client-side disabling of this feature for proxy users

* feat(fireworks_ai/): return 'supports_vision' and 'supports_pdf_input' true on all fireworks ai models

now true as fireworks ai supports document inlining

* test: fix tests

* fix(router.py): add unit testing for _is_model_access_group_for_wildcard_route
2024-12-28 19:38:06 -08:00
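A hedged sketch of document inlining from the caller's side: passing a PDF to a non-vision fireworks model and letting the auto transform-inline behavior this PR adds make it work (model and URL are placeholders):

```python
import litellm

resp = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p3-70b-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this document."},
            # A non-vision model can receive this because fireworks inlines
            # the document; the PR also allows disabling this client-side.
            {"type": "image_url",
             "image_url": {"url": "https://example.com/report.pdf"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```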
Ishaan Jaff
5c1e8b60d4 ui new build 2024-12-28 18:14:36 -08:00
Ishaan Jaff
3158dcf88b
(Security fix) - Upgrade to fastapi==0.115.5 (#7447)
* fix upgrade fast api

* bump fastapi

* update a proxy startup tests

* remove unused test file

* update tests

* bump fast api
2024-12-28 17:08:19 -08:00
Krish Dholakia
0924df4971
Litellm dev 12 27 2024 p2 1 (#7449)
* fix(azure_ai/transformation.py): route ai.services.azure calls to the azure provider route

requires token to be passed in as 'api-key'

Closes https://github.com/BerriAI/litellm/issues/7275

* fix(key_management_endpoints.py): enforce user is member of team, if team_id set and team_id exists in team table

* fix(key_management_endpoints.py): handle assigned_user_id = none

* feat(create_key_button.tsx): allow assigning keys to other users

allows proxy admin to easily assign other people keys

* build(create_key_button.tsx): fix error message display

don't swallow the error message for key creation failure

* build(create_key_button.tsx): allow proxy admin to edit team id

* build(create_key_button.tsx): allow proxy admin to assign keys to other users

* build(edit_user.tsx): clarify how 'user budgets' are applied

* test: remove dup test

* fix(key_management_endpoints.py): don't raise error if team not in db

* test: fix test
2024-12-27 20:02:32 -08:00
Krish Dholakia
67b39bacf7
LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 (#7448)
* feat(main.py): mock_response() - support 'litellm.ContextWindowExceededError' in mock response

enabled quicker router/fallback/proxy debug on context window errors

* feat(exception_mapping_utils.py): extract special litellm errors from error str if calling `litellm_proxy/` as provider

Closes https://github.com/BerriAI/litellm/issues/7259

* fix(user_api_key_auth.py): specify 'Received Proxy Server Request' is span kind server

Closes https://github.com/BerriAI/litellm/issues/7298
2024-12-27 19:04:39 -08:00
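A sketch of the new mock hook, which the commit above adds for quicker router/fallback debugging without a real oversized prompt:

```python
import litellm

try:
    # Asking mock_response to raise the named litellm error, per this PR.
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        mock_response="litellm.ContextWindowExceededError",
    )
except litellm.ContextWindowExceededError as e:
    print("got mocked context window error:", e)
```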
Ishaan Jaff
2ece919f01
(Feat) - new endpoint GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path} (#7427)
* init commit ft jobs logging

* add ft logging

* add logging for FineTuningJob

* simple FT Job create test

* simplify Azure fine tuning to use all methods in OAI ft

* update doc string

* add aretrieve_fine_tuning_job

* re use from litellm.proxy.utils import handle_exception_on_proxy

* fix naming

* add /fine_tuning/jobs/{fine_tuning_job_id:path}

* remove unused imports

* update func signature

* run ci/cd again

* ci/cd run again

* fix code quality

* ci/cd run again
2024-12-27 17:01:14 -08:00
Ishaan Jaff
62753eea69
(Feat) Log Guardrails run, guardrail response on logging integrations (#7445)
* add guardrail_information to SLP

* use standard_logging_guardrail_information

* track StandardLoggingGuardrailInformation

* use log_guardrail_information

* use log_guardrail_information

* docs guardrails

* docs guardrails

* update quick start

* fix presidio logging for sync functions

* update Guardrail type

* enforce add_standard_logging_guardrail_information_to_request_data

* update gd docs
2024-12-27 15:01:56 -08:00
Ishaan Jaff
3e7794d880
(feat) /guardrails/list show guardrail info params (#7442)
* add GuardrailInfoResponse

* add list_guardrails

* test_get_guardrails_list_response
2024-12-27 14:35:00 -08:00
Krish Dholakia
d88de268dd
Litellm dev 12 26 2024 p4 (#7439)
* fix(model_dashboard.tsx): support setting model_info params - e.g. mode on ui

Closes https://github.com/BerriAI/litellm/issues/5270

* fix(lowest_tpm_rpm_v2.py): deployment rpm over limit check

fixes selection error when getting potential deployments below known tpm/rpm limit

Fixes https://github.com/BerriAI/litellm/issues/7395

* fix(test_tpm_rpm_routing_v2.py): add unit test for https://github.com/BerriAI/litellm/issues/7395

* fix(lowest_tpm_rpm_v2.py): fix tpm key name in dict post rpm update

* test: rename test to run earlier

* test: skip flaky test
2024-12-27 12:01:42 -08:00
Krish Dholakia
40e2a95095
fix(key_management_endpoints.py): enforce user_id / team_id checks on key generate (#7437)
* fix(key_management_endpoints.py): enforce user_id / team_id checks on key generate

Fixes https://github.com/BerriAI/litellm/issues/7336

* test: fix tests
2024-12-27 10:15:48 -08:00
Ishaan Jaff
17d5ff2fa4
(fix) initializing OTEL Logging on LiteLLM Proxy - ensure OTEL logger is initialized only once (#7435)
* add otel to _custom_logger_compatible_callbacks_literal

* remove extra code

* fix _get_custom_logger_settings_from_proxy_server

* update unit tests
2024-12-26 21:17:19 -08:00