Commit graph

1648 commits

Author SHA1 Message Date
Krish Dholakia
8d4ad47ec3
fix(prometheus.py): fix setting key budget metrics (#8234)
* fix(prometheus.py): fix setting key budget metrics

ensures custom metadata works with key budget metric

this is a patch. root cause pr is written in a separate branch

* test: fix test
2025-02-04 19:15:50 -08:00
Krish Dholakia
df93debbc7
Internal User Endpoint - vulnerability fix + response type fix (#8228)
* fix(key_management_endpoints.py): fix vulnerability where a user could update another user's keys

Resolves https://github.com/BerriAI/litellm/issues/8031

* test(key_management_endpoints.py): return consistent 403 forbidden error when modifying key that doesn't belong to user

* fix(internal_user_endpoints.py): return model max budget in internal user create response

Fixes https://github.com/BerriAI/litellm/issues/7047

* test: fix test

* test: update test to handle gemini token counter change

* fix(factory.py): fix bedrock http:// handling

* docs: fix typo in lm_studio.md (#8222)

* test: fix testing

* test: fix test

---------

Co-authored-by: foreign-sub <51928805+foreign-sub@users.noreply.github.com>
2025-02-04 06:41:14 -08:00
Krish Dholakia
c17342ac5b
fix(openai/): allows 'reasoning_effort' param to be passed correctly (#8227)
* fix(openai/): allows 'reasoning_effort' param to be passed correctly

Fixes https://github.com/BerriAI/litellm/issues/8217

* test: update test to handle gemini token counter change

* fix(factory.py): fix bedrock http:// handling

* test: fix test

* test: update testing for new openai sdk
2025-02-03 22:39:10 -08:00
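For reference, a minimal sketch of the `reasoning_effort` pass-through this fixes, assuming an o-series model (the model name is illustrative):

```python
import litellm

# reasoning_effort is now a mapped OpenAI param for o-series models,
# so it can be passed like any other completion kwarg.
response = litellm.completion(
    model="o3-mini",  # illustrative o-series model name
    messages=[{"role": "user", "content": "Prove that 2 + 2 = 4."}],
    reasoning_effort="low",  # "low" | "medium" | "high"
)
print(response.choices[0].message.content)
```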
Ishaan Jaff
915cc064c5 fix test test_is_assemblyai_route 2025-02-03 21:58:32 -08:00
Ishaan Jaff
8fd60a420d
(Feat) - New pass through: add AssemblyAI passthrough endpoints (#8220)
* add assembly ai pass through request

* fix assembly pass through

* fix test_assemblyai_basic_transcribe

* fix assemblyai auth check

* test_assemblyai_transcribe_with_non_admin_key

* working assembly ai test

* working assembly ai proxy route

* use helper func to pass through logging

* clean up logging assembly ai

* test: update test to handle gemini token counter change

* fix(factory.py): fix bedrock http:// handling

* add unit testing for assembly pt handler

* docs assembly ai pass through endpoint

* fix proxy_pass_through_endpoint_tests

* fix standard_passthrough_logging_object

* fix ASSEMBLYAI_API_KEY

* test test_assemblyai_proxy_route_basic_post

* test_assemblyai_proxy_route_get_transcript

* fix is_assemblyai_route

* test_is_assemblyai_route

---------

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2025-02-03 21:54:32 -08:00
Krrish Dholakia
7ddb034b31 test: update test to handle gemini token counter change 2025-02-03 18:12:53 -08:00
Krish Dholakia
c8494abdea
test(base_llm_unit_tests.py): add test to ensure drop params is respe… (#8224)
* test(base_llm_unit_tests.py): add test to ensure drop params is respected

* fix(types/prometheus.py): use typing_extensions for python3.8 compatibility

* build: add cherry picked commits
2025-02-03 16:04:44 -08:00
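The drop-params behavior that new base test locks down can be sketched as follows (the model and the unsupported param are illustrative):

```python
import litellm

# With drop_params set, litellm strips parameters the target provider
# does not support instead of raising an error.
litellm.drop_params = True  # can also be passed per-call: drop_params=True

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # illustrative
    messages=[{"role": "user", "content": "hi"}],
    logit_bias={"50256": -100},  # unsupported by Anthropic; dropped, not fatal
)
```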
Krish Dholakia
97b8de17ab
LiteLLM Minor Fixes & Improvements (01/16/2025) - p2 (#7828)
* fix(vertex_ai/gemini/transformation.py): handle 'http://' image urls

* test: add base test for `http:` url's

* fix(factory.py/get_image_details): follow redirects

allows http calls to work

* fix(codestral/): fix stream chunk parsing on last chunk of stream

* Azure ad token provider (#6917)

* Update azure.py

Added optional parameter azure ad token provider

* Added parameter to main.py

* Found token provider arg location

* Fixed embeddings

* Fixed ad token provider

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix: fix linting errors

* fix(main.py): leave out o1 route for azure ad token provider, for now

get v0 out for sync azure gpt route to begin with

* test: skip http:// test for fireworks ai

model does not support it

* refactor: cleanup dead code

* fix: revert http:// url passthrough for gemini

google ai studio raises errors

* test: fix test

---------

Co-authored-by: bahtman <anton@baht.dk>
2025-02-02 23:17:50 -08:00
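A sketch of the contributed Azure AD token provider flow, assuming the kwarg is exposed as `azure_ad_token_provider` (per the PR title); resource and deployment names are placeholders:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
import litellm

# Callable returning fresh AAD bearer tokens for Azure OpenAI,
# used in place of a static api_key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",               # placeholder deployment
    api_base="https://my-resource.openai.azure.com",  # placeholder resource
    api_version="2024-08-01-preview",                 # illustrative
    azure_ad_token_provider=token_provider,
    messages=[{"role": "user", "content": "hello"}],
)
```

Per the commit, the o1 route is left out of this provider for now; the sync azure gpt route is the v0 target.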
Krish Dholakia
6834c5ecaf
Easier user onboarding via SSO (#8187)
* fix(ui_sso.py): use common `get_user_object` logic across jwt + ui sso auth

Allows finding users by their email, and attaching the sso user id to the user if found

* Improve Team Management flow on UI (#8204)

* build(teams.tsx): refactor teams page to make it easier to add members to a team

make a row in table clickable -> allows user to add users to team they intended

* build(teams.tsx): make it clear user should click on team id to view team details

simplifies team management by putting team details on separate page

* build(team_info.tsx): separately show user id and user email

make it easy for user to understand the information they're seeing

* build(team_info.tsx): add back in 'add member' button

* build(team_info.tsx): working team member update on team_info.tsx

* build(team_info.tsx): enable team member delete on ui

allow user to delete accidental adds

* build(internal_user_endpoints.py): expose new endpoint for ui to allow filtering on user table

allows proxy admin to quickly find user they're looking for

* feat(team_endpoints.py): expose new team filter endpoint for ui

allows proxy admin to easily find team they're looking for

* feat(user_search_modal.tsx): allow admin to filter on users when adding new user to teams

* test: mark flaky test

* test: mark flaky test

* fix(exception_mapping_utils.py): fix anthropic text route error

* fix(ui_sso.py): handle situation when user not in db
2025-02-02 23:02:33 -08:00
Krish Dholakia
1105e35538
Complete o3 model support (#8183)
* fix(o_series_transformation.py): add 'reasoning_effort' as o series model param

Closes https://github.com/BerriAI/litellm/issues/8182

* fix(main.py): ensure `reasoning_effort` is a mapped openai param

* refactor(azure/): rename o1_[x] files to o_series_[x]

* refactor(base_llm_unit_tests.py): refactor testing for o series reasoning effort

* test(test_azure_o_series.py): have azure o series tests correctly inherit from base o series model tests

* feat(base_utils.py): support translating 'developer' role to 'system' role for non-openai providers

Makes it easy to switch from openai to anthropic

* fix: fix linting errors

* fix(base_llm_unit_tests.py): fix test

* fix(main.py): add missing param
2025-02-02 22:36:37 -08:00
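The `developer` → `system` role translation added here can be pictured as (model name illustrative):

```python
import litellm

# For non-OpenAI providers, the OpenAI-style "developer" role is
# translated to "system", so prompts port across providers unchanged.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # illustrative
    messages=[
        {"role": "developer", "content": "Respond in JSON only."},
        {"role": "user", "content": "List three prime numbers."},
    ],
)
```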
Krish Dholakia
e4566d7b1c
fix(main.py): fix passing openrouter specific params (#8184)
* fix(main.py): fix passing openrouter specific params

Fixes https://github.com/BerriAI/litellm/issues/8130

* test(test_get_model_info.py): add check for region name w/ cris model

Resolves https://github.com/BerriAI/litellm/issues/8115
2025-02-02 22:23:14 -08:00
Ishaan Jaff
c0f3100934
[Bug Fix] - /vertex_ai/ was not detected as llm_api_route on pass through but vertex-ai was (#8186)
* fix mapped_pass_through_routes

* fix route checks

* update test_is_llm_api_route
2025-02-01 17:26:08 -08:00
Krish Dholakia
9e65f867ab
test: add more unit testing for team member endpoints (#8170)
* test: add more unit testing for team member add

* fix(team_endpoints.py): add validation check to prevent same user from being added to team again

prevents duplicates

* fix(team_endpoints.py): raise error if `/team/member_delete` called on member that's not in team

prevent being able to call delete on same member multiple times

* test: update initial tests

* test: fix test

* test: update test to handle no member duplication
2025-02-01 11:23:00 -08:00
Krish Dholakia
23f458d2da
Improved O3 + Azure O3 support (#8181)
* fix: support azure o3 model family for fake streaming workaround (#8162)

* fix: support azure o3 model family for fake streaming workaround

* refactor: rename helper to is_o_series_model for clarity

* update function calling parameters for o3 models (#8178)

* refactor(o1_transformation.py): refactor o1 config to be o series config, expand o series model check to o3

ensures max_tokens is correctly translated for o3

* feat(openai/): refactor o1 files to be 'o_series' files

expands naming to cover o3

* fix(azure/chat/o1_handler.py): azure openai is an instance of openai - was causing resets

* test(test_azure_o_series.py): assert stream faked for azure o3 mini

Resolves https://github.com/BerriAI/litellm/pull/8162

* fix(o1_transformation.py): fix o1 transformation logic to handle explicit o1_series routing

* docs(azure.md): update doc with `o_series/` model name

---------

Co-authored-by: byrongrogan <47910641+byrongrogan@users.noreply.github.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
2025-02-01 09:52:28 -08:00
Krish Dholakia
91ed05df29
Litellm dev contributor prs 01 31 2025 (#8168)
* Add O3-Mini for Azure and Remove Vision Support (#8161)

* Azure released O3-mini at the same time as OpenAI, so I've added support here. Confirmed to work with Sweden Central.

* [FIX] replace cgi for python 3.13 with email.Message as suggested in PEP 594 (#8160)

* Update model_prices_and_context_window.json (#8120)

codestral2501 pricing on vertex_ai

* Fix/db view names (#8119)

* Fix to case sensitive DB Views name

* Fix to case sensitive DB View names

* Added quotes to check query as well

* Added quotes to create view query

* test: handle server error  for flaky test

vertex ai has unstable endpoints

---------

Co-authored-by: Wanis Elabbar <70503629+elabbarw@users.noreply.github.com>
Co-authored-by: Honghua Dong <dhh1995@163.com>
Co-authored-by: superpoussin22 <vincent.nadal@orange.fr>
Co-authored-by: Miguel Armenta <37154380+ma-armenta@users.noreply.github.com>
2025-02-01 09:05:20 -08:00
Krish Dholakia
8d0db8b379
build(schema.prisma): add new sso_user_id to LiteLLM_UserTable (#8167)
* build(schema.prisma): add new `sso_user_id` to LiteLLM_UserTable

easier way to store sso id for existing user

Allows existing user added to team, to login via SSO

* test(test_auth_checks.py): add unit testing for fuzzy user object get

* fix(handle_jwt.py): fix merge conflicts
2025-01-31 23:04:05 -08:00
Krish Dholakia
2147cad307
Litellm dev 01 31 2025 p2 (#8164)
* docs(token_auth.md): clarify title

* refactor(handle_jwt.py): add jwt auth manager + refactor to handle groups

allows user to call model if user belongs to group with model access

* refactor(handle_jwt.py): refactor to first check if service call then check user call

* feat(handle_jwt.py): new `enforce_team_access` param

only allows user to call model if a team they belong to has model access

allows controlling user model access by team

* fix(handle_jwt.py): fix error string, remove unnecessary param

* docs(token_auth.md): add controlling model access for jwt tokens via teams to docs

* test: fix tests post refactor

* fix: fix linting errors

* fix: fix linting error

* test: fix import error
2025-01-31 22:52:35 -08:00
Ishaan Jaff
9ff27809b2
(Feat) add bedrock/deepseek custom import models (#8132)
* add support for using llama spec with bedrock

* fix get_bedrock_invoke_provider

* add support for using bedrock provider in mappings

* working request

* test_bedrock_custom_deepseek

* test_bedrock_custom_deepseek

* fix _get_model_id_for_llama_like_model

* test_bedrock_custom_deepseek

* doc DeepSeek-R1-Distill-Llama-70B

* test_bedrock_custom_deepseek
2025-01-31 18:40:44 -08:00
Ishaan Jaff
2cf0daa31c
(Fixes) OpenAI Streaming Token Counting + Fixes usage tracking when litellm.turn_off_message_logging=True (#8156)
* working streaming usage tracking

* fix test_async_chat_openai_stream_options

* fix await asyncio.sleep(1)

* test_async_chat_azure

* fix s3 logging

* fix get_stream_options

* fix get_stream_options

* fix streaming handler

* test_stream_token_counting_with_redaction

* fix codeql concern
2025-01-31 15:06:37 -08:00
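The streaming usage tracking fixed above follows the OpenAI `stream_options` contract; a sketch of consuming it via litellm (model name illustrative):

```python
import litellm

# With include_usage, the final chunk of the stream carries token usage.
# The fixes above keep this tracking intact even when
# litellm.turn_off_message_logging = True redacts message content.
stream = litellm.completion(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": "count to three"}],
    stream=True,
    stream_options={"include_usage": True},
)

usage = None
for chunk in stream:
    if getattr(chunk, "usage", None):
        usage = chunk.usage  # populated on the last chunk
print(usage)
```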
Krish Dholakia
de261e2120
Doc updates + management endpoint fixes (#8138)
* Litellm dev 01 29 2025 p4 (#8107)

* fix(key_management_endpoints.py): always get db team

Fixes https://github.com/BerriAI/litellm/issues/7983

* test(test_key_management.py): add unit test enforcing check_db_only is always true on key generate checks

* test: fix test

* test: skip gemini thinking

* Litellm dev 01 29 2025 p3 (#8106)

* fix(__init__.py): reduces size of __init__.py and reduces scope for errors by using correct param

* refactor(__init__.py): refactor init by cleaning up redundant params

* refactor(__init__.py): move more constants into constants.py

cleanup root

* refactor(__init__.py): more cleanup

* feat(__init__.py): expose new 'disable_hf_tokenizer_download' param

enables hf model usage in offline env

* docs(config_settings.md): document new disable_hf_tokenizer_download param

* fix: fix linting error

* fix: fix unsafe comparison

* test: fix test

* docs(public_teams.md): add doc showing how to expose public teams for users to join

* docs: add beta disclaimer on public teams

* test: update tests
2025-01-30 22:56:41 -08:00
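A sketch of the new offline-tokenizer flag documented in this PR (the HF model name is illustrative):

```python
import litellm

# In offline/air-gapped environments, skip Hugging Face tokenizer
# downloads and fall back to the default tokenizer for counting.
litellm.disable_hf_tokenizer_download = True

n_tokens = litellm.token_counter(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative
    messages=[{"role": "user", "content": "hello world"}],
)
print(n_tokens)
```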
Krish Dholakia
69a6da4727
Litellm dev 01 30 2025 p2 (#8134)
* feat(lowest_tpm_rpm_v2.py): fix redis cache check to use >= instead of >

makes it consistent

* test(test_custom_guardrails.py): add more unit testing on default on guardrails

ensure it runs if user sent guardrail list is empty

* docs(quick_start.md): clarify default on guardrails run even if user guardrails list contains other guardrails

* refactor(litellm_logging.py): refactor no-log to helper util

allows for more consistent behavior

* feat(litellm_logging.py): add event hook to verbose logs

* fix(litellm_logging.py): add unit testing to ensure `litellm.disable_no_log_param` is respected

* docs(logging.md): document how to disable 'no-log' param

* test: fix test to handle feb

* test: cleanup old bedrock model

* fix: fix router check
2025-01-30 22:18:53 -08:00
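The documented `no-log` kill switch amounts to one setting (a sketch, per the commit above):

```python
import litellm

# When set, the per-request `no-log` param is ignored, so calls are
# logged even if callers ask to opt out.
litellm.disable_no_log_param = True
```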
Ishaan Jaff
da3ddd2282 fix test_generate_and_update_key 2025-01-30 21:34:33 -08:00
Ishaan Jaff
4005a51db2
(UI) fix adding Vertex Models (#8129)
* fix handleSubmit

* update handleAddModelSubmit

* add jest testing for ui

* add step for running ui unit tests

* add validate json step to add model

* ui jest testing fixes

* update package lock

* ci/cd run again

* fix antd import

* run jest tests first

* fix antd install

* fix ui unit tests

* fix unit test ui
2025-01-30 21:11:08 -08:00
Ishaan Jaff
8a235e7d38
(Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112)
* LoggingCallbackManager

* add logging_callback_manager

* use logging_callback_manager

* add add_litellm_failure_callback

* use add_litellm_callback

* use add_litellm_async_success_callback

* add_litellm_async_failure_callback

* linting fix

* fix logging callback manager

* test_duplicate_multiple_loggers_test

* use _reset_all_callbacks

* fix testing with dup callbacks

* test_basic_image_generation

* reset callbacks for tests

* fix check for _add_custom_logger_to_list

* fix test_amazing_sync_embedding

* fix _get_custom_logger_key

* fix batches testing

* fix _reset_all_callbacks

* fix _check_callback_list_size

* add callback_manager_test

* fix test gemini-2.0-flash-thinking-exp-01-21
2025-01-30 19:35:50 -08:00
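A hypothetical sketch of the dedupe-on-append idea behind LoggingCallbackManager (this is not litellm's actual class, just the technique):

```python
from typing import Any, List


class CallbackManager:
    """Append callbacks while refusing duplicates."""

    def __init__(self) -> None:
        self.callbacks: List[Any] = []

    def _key(self, cb: Any) -> str:
        if isinstance(cb, str):
            return cb  # built-in logger name, compare by value
        if hasattr(cb, "__qualname__"):
            return cb.__qualname__  # plain function
        return type(cb).__qualname__  # custom logger instance -> dedupe by class

    def add(self, cb: Any) -> None:
        if self._key(cb) not in {self._key(c) for c in self.callbacks}:
            self.callbacks.append(cb)
```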
Ishaan Jaff
89d0d893fd fix test gemini-2.0-flash-thinking-exp-01-21 2025-01-30 14:05:59 -08:00
Krish Dholakia
ba8ba9eddb
feat(databricks/chat/transformation.py): add tools and 'tool_choice' param support (#8076)
* feat(databricks/chat/transformation.py): add tools and 'tool_choice' param support

Closes https://github.com/BerriAI/litellm/issues/7788

* refactor: cleanup redundant file

* test: mark flaky test

* test: mark all parallel request tests as flaky
2025-01-29 21:09:07 -08:00
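A sketch of the new databricks tool-calling support (model and tool are illustrative):

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# tools / tool_choice now pass through on the databricks provider.
response = litellm.completion(
    model="databricks/databricks-meta-llama-3-1-70b-instruct",  # illustrative
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
```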
Ishaan Jaff
12a84897cf run ci/cd again 2025-01-29 20:55:49 -08:00
Krish Dholakia
dad24f2b52
Litellm dev 01 29 2025 p2 (#8102)
* docs: cleanup doc

* feat(bedrock/): initial commit adding bedrock/converse_like/<model> route support

allows routing to a converse like endpoint

Resolves https://github.com/BerriAI/litellm/issues/8085

* feat(bedrock/chat/converse_transformation.py): make converse config base config compatible

enables new 'converse_like' route

* feat(converse_transformation.py): enables using the proxy with converse like api endpoint

Resolves https://github.com/BerriAI/litellm/issues/8085
2025-01-29 20:53:37 -08:00
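A sketch of the new `bedrock/converse_like/<model>` route; endpoint, key, and model id are placeholders:

```python
import litellm

# Routes to any endpoint that speaks Bedrock's Converse API shape,
# even if it is not a native Bedrock model.
response = litellm.completion(
    model="bedrock/converse_like/my-model",    # placeholder model id
    api_base="https://example.com/converse",   # placeholder endpoint
    api_key="sk-...",                          # placeholder
    messages=[{"role": "user", "content": "hi"}],
)
```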
Ishaan Jaff
31e967cbbd test_generate_and_update_key 2025-01-29 18:48:34 -08:00
Ishaan Jaff
33470e15b4 ci/cd run again 2025-01-29 17:57:32 -08:00
Ishaan Jaff
b6d61ec22b
(Feat) pass through vertex - allow using credentials defined on litellm router for vertex pass through (#8100)
* test_add_vertex_pass_through_deployment

* VertexPassThroughRouter

* fix use_in_pass_through

* VertexPassThroughRouter

* fix vertex_credentials

* allow using _initialize_deployment_for_pass_through

* test_add_vertex_pass_through_deployment

* _set_default_vertex_config

* fix verbose_proxy_logger

* fix use_in_pass_through

* fix _get_token_and_url

* test_get_vertex_location_from_url

* test_get_vertex_credentials_none

* run pt unit testing again

* fix add_vertex_credentials

* test_adding_deployments.py

* rename file
2025-01-29 17:54:02 -08:00
Ishaan Jaff
46b44f3a7f ci/cd run again
2025-01-28 22:18:03 -08:00
Ishaan Jaff
5a7dc11432 test fix: test_async_create_batch - use only openai for testing, hitting azure limits 2025-01-28 22:17:49 -08:00
Ishaan Jaff
b812286534
(fix) - proxy reliability, ensure duplicate callbacks are not added to proxy (#8067)
* refactor _add_callbacks_from_db_config

* fix check for _custom_logger_exists_in_litellm_callbacks

* move loc of test utils

* run ci/cd again

* test_add_custom_logger_callback_to_specific_event_with_duplicates_callbacks

* fix _custom_logger_class_exists_in_success_callbacks

* unit testing for test_add_callbacks_from_db_config

* test_custom_logger_exists_in_callbacks_individual_functions

* fix config.yml

* fix test test_stream_chunk_builder_openai_audio_output_usage - use direct dict comparison
2025-01-28 21:01:56 -08:00
Ishaan Jaff
ae7b042bc2
(beta ui - spend logs view fixes & improvements 1) (#8062)
* ui 1 - show correct msg on no logs

* fix dup country col

* backend - allow filtering by team_id and api_key

* fix ui_view_spend_logs

* ui update query params

* working team id and key hash filters

* fix filter ref - don't hold onto them as they are

* fix _model_custom_llm_provider_matches_wildcard_pattern

* fix test test_stream_chunk_builder_openai_audio_output_usage - use direct dict comparison
2025-01-28 20:34:22 -08:00
Krish Dholakia
d9eb8f42ff
Litellm dev 01 27 2025 p3 (#8047)
* docs(reliability.md): add doc on disabling fallbacks per request

* feat(litellm_pre_call_utils.py): support reading request timeout from request headers - new `x-litellm-timeout` param

Allows setting dynamic model timeouts from vercel's AI sdk

* test(test_proxy_server.py): add simple unit test for reading request timeout

* test(test_fallbacks.py): add e2e test to confirm timeout passed in request headers is correctly read

* feat(main.py): support passing metadata to openai in preview

Resolves https://github.com/BerriAI/litellm/issues/6022#issuecomment-2616119371

* fix(main.py): fix passing openai metadata

* docs(request_headers.md): document new request headers

* build: Merge branch 'main' into litellm_dev_01_27_2025_p3

* test: loosen test
2025-01-28 18:01:27 -08:00
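A sketch of the new `x-litellm-timeout` request header against the proxy (base_url and key are placeholders):

```python
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

# The proxy reads a per-request timeout (in seconds) from this header,
# e.g. as set by Vercel's AI SDK.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": "hi"}],
    extra_headers={"x-litellm-timeout": "5"},
)
```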
Krish Dholakia
9c20c69915
Fix bedrock model pricing + add unit test using bedrock pricing api (#7978)
* test(test_completion_cost.py): add unit testing to ensure all bedrock models with region name have cost tracked

* feat: initial script to get bedrock pricing from amazon api

ensures bedrock pricing is accurate

* build(model_prices_and_context_window.json): correct bedrock model prices based on api check

ensures accurate bedrock pricing

* ci(config.yml): add bedrock pricing check to ci/cd

ensures litellm always maintains up-to-date pricing for bedrock models

* ci(config.yml): add beautiful soup to ci/cd

* test: bump groq model

* test: fix test
2025-01-28 17:57:49 -08:00
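The corrected prices live in litellm's model map; a sketch of spot-checking them (the bedrock model id is illustrative):

```python
import litellm

# Inspect registered per-token pricing for a bedrock model.
info = litellm.get_model_info("bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0")
print(info["input_cost_per_token"], info["output_cost_per_token"])

# Or compute the cost of a finished call directly.
cost = litellm.completion_cost(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    prompt="hello",
    completion="hi there",
)
print(cost)
```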
Krish Dholakia
8eaa5dc797
Bedrock document processing fixes (#8005)
* refactor(factory.py): refactor async bedrock message transformation to use async get request for image url conversion

improve latency of bedrock call

* test(test_bedrock_completion.py): add unit testing to ensure async image url get called for async bedrock call

* refactor(factory.py): refactor bedrock translation to use BedrockImageProcessor

reduces duplicate code

* fix(factory.py): fix bug not allowing pdf's to be processed

* fix(factory.py): fix bedrock converse document understanding with image url

* docs(bedrock.md): clarify all bedrock document types are supported

* refactor: cleanup redundant test + unused imports

* perf: improve perf with reusable clients

* test: fix test
2025-01-28 17:48:32 -08:00
Krish Dholakia
c2e3986bbc
fix(utils.py): handle failed hf tokenizer request during calls (#8032)
* fix(utils.py): handle failed hf tokenizer request during calls

prevents proxy from failing due to bad hf tokenizer calls

* fix(utils.py): convert failure callback str to custom logger class

Fixes https://github.com/BerriAI/litellm/issues/8013

* test(test_utils.py): fix test - avoid adding mlflow dep on ci/cd

* fix: add missing env vars to test

* test: cleanup redundant test
2025-01-28 17:20:36 -08:00
Ishaan Jaff
74e332bfdd fix test test_stream_chunk_builder_openai_audio_output_usage - use direct dict comparison
2025-01-28 16:28:24 -08:00
Krish Dholakia
2eaa0079f2
feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… (#8037)
* feat(handle_jwt.py): initial commit adding custom RBAC support on jwt auth

allows admin to define user role field and allowed roles which map to 'internal_user' on litellm

* fix(auth_checks.py): ensure user allowed to access model, when calling via personal keys

Fixes https://github.com/BerriAI/litellm/issues/8029

* feat(handle_jwt.py): support role based access with model permission control on proxy

Allows admin to just grant users roles on IDP (e.g. Azure AD/Keycloak) and user can immediately start calling models

* docs(rbac): add docs on rbac for model access control

make it clear how admin can use roles to control model access on proxy

* fix: fix linting errors

* test(test_user_api_key_auth.py): add unit testing to ensure rbac role is correctly enforced

* test(test_user_api_key_auth.py): add more testing

* test(test_users.py): add unit testing to ensure user model access is always checked for new keys

Resolves https://github.com/BerriAI/litellm/issues/8029

* test: fix unit test

* fix(dot_notation_indexing.py): fix typing to work with python 3.8
2025-01-28 16:27:06 -08:00
Ishaan Jaff
9644e197f7 deepseek api testing - deepseek is currently hanging
2025-01-27 22:04:36 -08:00
Ishaan Jaff
46469c6087 set timeout for deepseek testing 2025-01-27 21:25:28 -08:00
Steve Farthing
fe0f9213af Bing Search Pass Through 2025-01-27 08:58:04 -05:00
Krish Dholakia
6bafdbc546
Litellm dev 01 25 2025 p4 (#8006)
* feat(main.py): use asyncio.sleep for mock_timeout=true on async request

adds unit testing to ensure proxy does not fail if specific Openai requests hang (e.g. recent o1 outage)

* fix(streaming_handler.py): fix deepseek r1 return reasoning content on streaming

Fixes https://github.com/BerriAI/litellm/issues/7942

* Revert "fix(streaming_handler.py): fix deepseek r1 return reasoning content on streaming"

This reverts commit 7a052a64e3.

* fix(deepseek-r-1): return reasoning_content as a top-level param

ensures compatibility with existing tools that use it

* fix: fix linting error
2025-01-26 08:01:05 -08:00
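A sketch of the top-level `reasoning_content` field this settles on (model name per litellm's deepseek provider):

```python
import litellm

response = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
)

# The reasoning trace is surfaced top-level, next to the final answer,
# for compatibility with tools that already read it there.
print(response.choices[0].message.reasoning_content)
print(response.choices[0].message.content)
```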
Krish Dholakia
03eef5a2a0
Fix custom pricing - separate provider info from model info (#7990)
* fix(utils.py): initial commit fixing custom cost tracking

refactors out provider specific model info from `get_model_info` - this was causing custom costs to be registered incorrectly

* fix(utils.py): cleanup `_supports_factory` to check provider info, if model info is None

some providers support features like vision across all models

* fix(utils.py): refactor to use _supports_factory

* test: update testing

* fix: fix linting errors

* test: fix testing
2025-01-25 21:49:28 -08:00
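With provider info separated from model info, custom pricing registers cleanly; a sketch (model name and costs are illustrative):

```python
import litellm

litellm.register_model({
    "my-proxy-model": {
        "input_cost_per_token": 6e-07,     # illustrative
        "output_cost_per_token": 2.4e-06,  # illustrative
        "litellm_provider": "openai",
        "mode": "chat",
    }
})
```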
Ishaan Jaff
dcc3bbc264
(Fix) langfuse - setting LANGFUSE_FLUSH_INTERVAL (#8007)
* fix langfuse flush interval

* test_get_langfuse_flush_interval

* test_get_langfuse_flush_interval
2025-01-25 17:17:32 -08:00
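A sketch of the flush-interval setting this fixes (keys are placeholders):

```python
import os
import litellm

# Read when the langfuse logger is initialized; value in seconds.
os.environ["LANGFUSE_FLUSH_INTERVAL"] = "10"
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-..."  # placeholder
os.environ["LANGFUSE_SECRET_KEY"] = "sk-..."  # placeholder

litellm.success_callback = ["langfuse"]
```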
Ishaan Jaff
d19614b8c0
(QA / testing) - Add e2e tests for key model access auth checks (#8000)
* fix _model_matches_any_wildcard_pattern_in_list

* test key model access checks

* add key_model_access_denied to ProxyErrorTypes

* update auth checks

* test_model_access_update

* test_team_model_access_patterns

* fix _team_model_access_check

* fix config used for otel testing

* test fix test_call_with_invalid_model

* fix model acces check tests

* test_team_access_groups

* test _model_matches_any_wildcard_pattern_in_list
2025-01-25 17:15:11 -08:00
Krish Dholakia
08b124aeb6
Litellm dev 01 25 2025 p2 (#8003)
* fix(base_utils.py): support nested json schema passed in for anthropic calls

* refactor(base_utils.py): refactor ref parsing to prevent infinite loop

* test(test_openai_endpoints.py): refactor anthropic test to use bedrock

* fix(langfuse_prompt_management.py): add unit test for sync langfuse calls

Resolves https://github.com/BerriAI/litellm/issues/7938#issuecomment-2613293757
2025-01-25 16:50:57 -08:00
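Nested pydantic models generate JSON schemas with `$ref` entries, which the ref-parsing fix above now resolves for anthropic; a sketch (model name illustrative):

```python
from pydantic import BaseModel
import litellm


class Address(BaseModel):
    city: str


class Person(BaseModel):
    name: str
    address: Address  # nested model -> $ref in the generated schema


response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # illustrative
    messages=[{"role": "user", "content": "Invent a person."}],
    response_format=Person,
)
```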
Ishaan Jaff
a7b3c664d1
(Feat) set guardrails per team (#7993)
* _add_guardrails_from_key_or_team_metadata

* e2e test test_guardrails_with_team_controls

* add try/except on team new

* test_guardrails_with_team_controls

* test_guardrails_with_api_key_controls
2025-01-25 10:41:11 -08:00