Commit graph

416 commits

Author SHA1 Message Date
Ishaan Jaff
f6fa2399cc
(Router) - If allowed_fails or allowed_fail_policy set, use that for single deployment cooldown logic (#8668)
* fix cooldown 1 deployment

* test_single_deployment_cooldown_with_allowed_fail_policy

* fix docstring test

* test_single_deployment_no_cooldowns

* ci/cd run again

* test router cooldowns
2025-02-25 15:15:01 -08:00
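
A minimal sketch of the single-deployment cooldown behavior described in the commit above, assuming the standard `litellm.Router` constructor; the model entry and key below are placeholders:

```python
from litellm import Router

# Hypothetical single-deployment setup: with allowed_fails set, the lone
# deployment should tolerate that many failures before being cooled down,
# instead of being cooled down on the first error.
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "sk-placeholder"},
        }
    ],
    allowed_fails=3,   # tolerate 3 failures before cooldown
    cooldown_time=30,  # seconds a deployment stays in cooldown
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
)
```
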
Krrish Dholakia
b7ec53aec1 test: handle index error 2025-02-24 22:11:08 -08:00
Krish Dholakia
142b195784
Add anthropic thinking + reasoning content support (#8778)
* feat(anthropic/chat/transformation.py): add anthropic thinking param support

* feat(anthropic/chat/transformation.py): support returning thinking content for anthropic on streaming responses

* feat(anthropic/chat/transformation.py): return list of thinking blocks (include block signature)

allows usage in tool call responses

* fix(types/utils.py): extract and map reasoning_content from anthropic as content str

* test: add testing to ensure thinking_blocks are returned at the root

* fix(anthropic/chat/handler.py): return thinking blocks on streaming - include signature

* feat(factory.py): handle anthropic thinking blocks translation if in assistant response

* test: handle openai internal instability

* test: handle openai audio instability

* ci: pin anthropic dep

* test: handle openai audio instability

* fix: fix linting error

* refactor(anthropic/chat/transformation.py): refactor function to remain <50 LOC

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error
2025-02-24 21:54:30 -08:00
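
A rough sketch of how the thinking/reasoning support added above might be exercised from the SDK, assuming the `thinking` request param and the `reasoning_content` / `thinking_blocks` response fields this PR introduces; the model name is an example:

```python
import litellm

# Request extended thinking from an Anthropic model, then read back both the
# mapped reasoning_content string and the raw thinking blocks (which include
# block signatures, usable in tool-call follow-ups).
response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "What is 27 * 453?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)

message = response.choices[0].message
print(message.content)                              # final answer
print(getattr(message, "reasoning_content", None))  # extracted thinking text
print(getattr(message, "thinking_blocks", None))    # raw blocks w/ signatures
```
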
Krish Dholakia
9914c166b7
Litellm contributor prs 02 24 2025 (#8775)
* Adding VertexAI Claude 3.7 Sonnet (#8774)

Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>

* build(model_prices_and_context_window.json): add anthropic 3-7 models on vertex ai and bedrock

* Support video_url (#8743)

* Support video_url

Support VLMs that work with video.
Example implementation in vLLM: https://github.com/vllm-project/vllm/pull/10020

* llms openai.py: Add ChatCompletionVideoObject

Add data structures to support `video_url` in chat completion

* test test_completion.py: add test for video_url

* Arize Phoenix - ensure correct endpoint/protocol are used; and default to phoenix cloud (#8750)

* minor fixes to default to http and to ensure that the correct endpoint is used

* Update test_arize_phoenix.py

* prioritize http over grpc

---------

Co-authored-by: Emerson Gomes <emerson.gomes@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Pang Wu <104795337+pang-wu@users.noreply.github.com>
Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-24 18:55:48 -08:00
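
A sketch of the new `video_url` content part described above, assuming an OpenAI-compatible message shape; the model name and URL are placeholders (vLLM-style servers that accept video inputs are the target named in the PR):

```python
import litellm

# Pass a video to a VLM using the new `video_url` content part.
response = litellm.completion(
    model="hosted_vllm/qwen2-vl-7b-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this clip."},
                {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```
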
Dragos Campean
2905ad98b3
Add anthropic3-7-sonnet (#8766)
2025-02-24 12:59:00 -08:00
Krish Dholakia
09462ba80c
Add cohere v2/rerank support (#8421) (#8605)
* Add cohere v2/rerank support (#8421)

* Support v2 endpoint cohere rerank

* Add tests and docs

* Make v1 default if old params used

* Update docs

* Update docs pt 2

* Update tests

* Add e2e test

* Clean up code

* Use inheritance for new config

* Fix linting issues (#8608)

* Fix cohere v2 failing test + linting (#8672)

* Fix test and unused imports

* Fix tests

* fix: fix linting errors

* test: handle tgai instability

* fix: skip service unavailable err

* test: print logs for unstable test

* test: skip unreliable tests

---------

Co-authored-by: vibhavbhat <vibhavb00@gmail.com>
2025-02-22 22:25:29 -08:00
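
A sketch of the rerank call covered by the PR above. The commit notes v1 stays the default when old params are used; exactly how v2 is selected isn't spelled out in the messages, so this shows the generic `litellm.rerank` entrypoint with placeholder model and documents:

```python
import litellm

# Rerank documents against a query via Cohere rerank.
response = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=[
        "Paris is the capital of France.",
        "Berlin is the capital of Germany.",
    ],
    top_n=1,
)
print(response.results)  # ranked indices with relevance scores
```
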
Krish Dholakia
21ea52105a
Support arize phoenix on litellm proxy (#7756) (#8715)
* Update opentelemetry.py

wip

* Update test_opentelemetry_unit_tests.py

* fix a few paths and tests

* fix path

* Update litellm_logging.py

* accidentally removed code

* Add type for protocol

* Add and update tests

* minor changes

* update and add additional arize phoenix test

* update existing test

* address feedback

* use standard_logging_object

* address feedback

Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-22 20:55:11 -08:00
Krish Dholakia
251467a525
add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684)
* build(model_prices_and_context_window.json): mark bedrock llama as supporting vision based on docs

* Add price for Cerebras llama3.3-70b (#8676)

* docs(readme.md): fix contributing docs

point people to new mock directory testing structure s/o @vibhavbhat

* build: update contributing readme

* docs(readme.md): improve docs

* docs(readme.md): cleanup readme on tests/

* docs(README.md): cleanup doc

* feat(infinity/): support returning documents when return_documents=True

* test(test_rerank.py): add e2e testing for cohere rerank

* fix: fix linting errors

* fix(together_ai/): fix together ai transformation

* fix: fix linting error

* fix: fix linting errors

* fix: fix linting errors

* test: mark cohere as flaky

* build: fix model supports check

* test: fix test

* test: mark flaky test

* fix: fix test

* test: fix test

---------

Co-authored-by: Yury Koleda <fut.wrk@gmail.com>
2025-02-20 21:23:54 -08:00
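
A sketch of the `return_documents` flag added above for cohere / infinity rerank, assuming it passes straight through `litellm.rerank`; the result-dict field names are assumptions based on the Cohere response shape:

```python
import litellm

# Ask the reranker to echo the document text back with each result.
response = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="best pizza in town",
    documents=["Mario's has great pizza.", "The library is quiet."],
    return_documents=True,  # results should include the document text
)
for result in response.results:
    print(result["relevance_score"], result.get("document"))
```
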
Krish Dholakia
b682dc4ec8
Add cost tracking for rerank via bedrock (#8691)
* feat(bedrock/rerank): infer model region if model given as arn

* test: add unit testing to ensure bedrock region name inferred from arn on rerank

* feat(bedrock/rerank/transformation.py): include search units for bedrock rerank result

Resolves https://github.com/BerriAI/litellm/issues/7258#issuecomment-2671557137

* test(test_bedrock_completion.py): add testing for bedrock cohere rerank

* feat(cost_calculator.py): refactor rerank cost tracking to support bedrock cost tracking

* build(model_prices_and_context_window.json): add amazon.rerank model to model cost map

* fix(cost_calculator.py): bedrock/common_utils.py

get base model from model w/ arn -> handles rerank model

* build(model_prices_and_context_window.json): add bedrock cohere rerank pricing

* feat(bedrock/rerank): migrate bedrock config to basererank config

* Revert "feat(bedrock/rerank): migrate bedrock config to basererank config"

This reverts commit 84fae1f167.

* test: add testing to ensure large doc / queries are correctly counted

* Revert "test: add testing to ensure large doc / queries are correctly counted"

This reverts commit 4337f1657e.

* fix(migrate-jina-ai-to-rerank-config): enables cost tracking

* refactor(jina_ai/): finish migrating jina ai to base rerank config

enables cost tracking

* fix(jina_ai/rerank): e2e jina ai rerank cost tracking

* fix: cleanup dead code

* fix: fix python3.8 compatibility error

* test: fix test

* test: add e2e testing for azure ai rerank

* fix: fix linting error

* test: mark cohere as flaky
2025-02-20 21:00:18 -08:00
Krish Dholakia
f9df01fbc6
fix(utils.py): handle token counter error when invalid message passed in (#8670)
* fix(utils.py): handle token counter error

* fix(utils.py): testing fixes

* fix(utils.py): fix incr for num tokens from list

* fix(utils.py): fix text str token counting
2025-02-19 22:21:34 -08:00
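
The fix above hardens `litellm.token_counter` against unexpected message shapes; a short sketch of the call being hardened:

```python
import litellm

# token_counter should now degrade gracefully instead of raising when a
# message's content is not a plain string (e.g. a content-part list).
tokens = litellm.token_counter(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "hello world"},
        {"role": "user", "content": [{"type": "text", "text": "part two"}]},
    ],
)
print(tokens)
```
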
Krish Dholakia
cc77138b37
Add all /key/generate api params to UI + add metadata fields on team AND org add/update (#8667)
* feat(create_key_button.tsx): initial commit using openapi.json to ensure all values via api are supported on ui for `/key/generate`

Closes https://github.com/BerriAI/litellm/issues/7763

* style(create_key_button.tsx): put openapi settings inside 'advanced setting' accordion

* fix(check_openapi_schema.tsx): style improvements for advanced settings

* style(create_key_button.tsx): add tooltip explaining what the settings mean

* fix(team_info.tsx): render metadata field on team update

allow updating a team's metadata

* fix(networking.tsx): add 'metadata' field to create team form

* refactor: cleanup dead codeblock

* fix(organization_endpoints.py): fix metadata param support on `/organization/new`

* feat(organization_endpoints.py): support updating metadata for organization on api + ui

* test: mark flaky test
2025-02-19 21:13:06 -08:00
Ishaan Jaff
045cf3f9e2
(Bug Fix Redis) - Fix running redis.mget operations with None Keys (#8666)
* async_batch_get_cache

* test_batch_get_cache_with_none_keys

* async_batch_get_cache

* fix linting error
2025-02-19 19:56:57 -08:00
Ishaan Jaff
b88762b63c
(Polish/Fixes) - Fixes for Adding Team Specific Models (#8645)
* refactor get model info for team models

* allow adding a model to a team when creating team specific model

* ui update selected Team on Team Dropdown

* test_team_model_association

* testing for team specific models

* test_get_team_specific_model

* test: skip on internal server error

* remove model alias card on teams page

* linting fix _get_team_specific_model

* fix DeploymentTypedDict

* fix linting error

* fix code quality

* fix model info checks

---------

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2025-02-18 21:11:57 -08:00
Krish Dholakia
2b7755f8d8
Litellm dev 02 18 2025 p3 (#8640)
* fix(team_endpoints.py): cleanup user <-> team association on team delete

Fixes issue where user table still listed team membership post delete

* test(test_team.py): update e2e test - ensure user/team membership is deleted on team delete

* fix(base_invoke_transformation.py): fix deepseek r1 transformation

remove deepseek name from model url

* test(test_completion.py): assert model route not in url

* feat(base_invoke_transformation.py): infer region name from model arn

prevent errors due to different region name in env var vs. model arn, respect if explicitly set in call though

* test: fix test

* test: skip on internal server error
2025-02-18 19:14:20 -08:00
Krish Dholakia
ef87b0eaae
Litellm dev 02 18 2025 p2 (#8639)
* fix(parallel_request_limiter.py): improve single instance rate limiting by updating in-memory cache instantly

Fixes issue where parallel request limiter had a leak

* fix(parallel_request_limiter.py): fix parallel request limiter to not decrement val on max limit being reached

* test(test_parallel_request_limiter.py): fix test

* test: fix test

* fix(parallel_request_limiter.py): move to using common enum

* test: fix test
2025-02-18 19:12:16 -08:00
Ishaan Jaff
d1ba04d9d9
[Feature]: Redis Caching - Allow setting a namespace for redis cache (#8624)
* use _add_namespace_to_cache_key

* fix cache_control_args

* test_redis_caching_multiple_namespaces

* test_add_namespace_to_cache_key

* test_redis_caching_multiple_namespaces

* docs redis name space

* test_add_namespace_to_cache_key
2025-02-18 14:47:34 -08:00
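
A sketch of the redis namespace setting described above, assuming the `namespace` argument on `litellm.Cache` (the import path below matches recent litellm layouts but may vary by version); host/port are placeholders:

```python
import litellm
from litellm.caching.caching import Cache  # path may differ on older versions

# All redis keys get prefixed with the namespace, so two apps can share one
# redis instance without cache-key collisions.
litellm.cache = Cache(
    type="redis",
    host="localhost",
    port=6379,
    namespace="litellm.app1",  # keys stored as "litellm.app1:<hashed-key>"
)
```
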
Krish Dholakia
ea985dda0b
fix(model_cost_map): fix json parse error on model cost map + add unit test (#8629)
Fixes https://github.com/BerriAI/litellm/pull/8619#issuecomment-2666693045
2025-02-18 11:18:16 -08:00
Krish Dholakia
2340f1b31f
Pass router tags in request headers - x-litellm-tags (#8609)
* feat(litellm_pre_call_utils.py): support `x-litellm-tags` request header

allow tag based routing + spend tracking via request headers

* docs(request_headers.md): document new `x-litellm-tags` for tag based routing and spend tracking

* docs(tag_routing.md): add to docs

* fix(utils.py): only pass str values for openai metadata param

* fix(utils.py): drop non-str values for metadata param to openai

preview-feature, otel span was being sent in
2025-02-18 08:26:22 -08:00
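
A sketch of tag-based routing via the new request header, assuming a comma-separated tag list and an OpenAI client pointed at a running LiteLLM proxy (base_url and key are placeholders):

```python
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

# Send tags to the proxy via the new `x-litellm-tags` header for
# tag-based routing + spend tracking.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    extra_headers={"x-litellm-tags": "teamA,prod"},
)
```
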
Ishaan Jaff
8024300825
(UI) Improvements to Add Team Model Flow (#8603)
* ui - use common team dropdown component

* re-use team component

* rename org field on add model

* handle add model submit

* working view model_id and team_id on root models page

* cleaner

* show all fields

* working model info view

* working team info selector

* clean up team id

* new component for model dashboard

* ui show table with dropdown

* make public model names like email

* revert changes to litellm model name

* fix litellm model name

* ui fix public model

* fix mappings

* fix conditional text input

* fix message

* ui fix bulk add models

* _add_team_model_to_db

* move model mgmt helper funcs

* test_add_team_model_to_db

* ui - display model team model name

* fix add model tab

* fix remove redundant info tab on models page

* dont pass model mappings all the way through

* fix jarring model name when adding team models

* fix edit model button

* delete button on model info

* ui fix model dashboard

* fix DeploymentTypedDict

* _is_model_access_group_for_wildcard_route

* test _get_public_model_name

* ui fix viewing public model name

* fix linting error

* fix linting errors

* fix selectedModel logic
2025-02-17 18:37:14 -08:00
Ishaan Jaff
6b3bfa2b42
(Feat) - return x-litellm-attempted-fallbacks in responses from litellm proxy (#8558)
* add_fallback_headers_to_response

* test x-litellm-attempted-fallbacks

* unit test attempted fallbacks

* fix add_fallback_headers_to_response

* docs document response headers

* fix file name
2025-02-15 14:54:23 -08:00
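
A sketch of reading the new response header from the proxy, using the OpenAI SDK's raw-response wrapper; proxy URL and key are placeholders:

```python
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

# Use the raw response wrapper to inspect proxy response headers, including
# the new x-litellm-attempted-fallbacks count.
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
)
print(raw.headers.get("x-litellm-attempted-fallbacks"))
response = raw.parse()  # the usual ChatCompletion object
```
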
Krish Dholakia
58141df65d
Litellm dev 02 13 2025 p2 (#8525)
* fix(azure/chat/gpt_transformation.py): add 'prediction' as a supported azure param

Closes https://github.com/BerriAI/litellm/issues/8500

* build(model_prices_and_context_window.json): add new 'gemini-2.0-pro-exp-02-05' model

* style: cleanup invalid json trailing comma

* feat(utils.py): support passing 'tokenizer_config' to register_prompt_template

enables passing complete tokenizer config of model to litellm

Allows calling deepseek on bedrock with the correct prompt template

* fix(utils.py): fix register_prompt_template for custom model names

* test(test_prompt_factory.py): fix test

* test(test_completion.py): add e2e test for bedrock invoke deepseek ft model

* feat(base_invoke_transformation.py): support hf_model_name param for bedrock invoke calls

enables proxy admin to set base model for ft bedrock deepseek model

* feat(bedrock/invoke): support deepseek_r1 route for bedrock

makes it easy to apply the right chat template to that call

* feat(constants.py): store deepseek r1 chat template - allow user to get correct response from deepseek r1 without extra work

* test(test_completion.py): add e2e mock test for bedrock deepseek

* docs(bedrock.md): document new deepseek_r1 route for bedrock

allows us to use the right config

* fix(exception_mapping_utils.py): catch read operation timeout
2025-02-13 20:28:42 -08:00
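
A sketch of the two pieces described above: registering a full tokenizer config as a prompt template, and the new `bedrock/deepseek_r1/...` invoke route. The tokenizer-config fields are abbreviated and the model ARN is a placeholder; real configs carry the model repo's full `chat_template` string:

```python
import litellm

# 1) Register an HF-style tokenizer config as the prompt template for a
#    custom (e.g. fine-tuned, imported) model name.
litellm.register_prompt_template(
    model="bedrock/arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123",
    tokenizer_config={
        "bos_token": "<|begin_of_sentence|>",
        "eos_token": "<|end_of_sentence|>",
        "chat_template": "{% for message in messages %}...{% endfor %}",
    },
)

# 2) Or use the dedicated deepseek_r1 route, which applies the right chat
#    template automatically.
response = litellm.completion(
    model="bedrock/deepseek_r1/arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123",
    messages=[{"role": "user", "content": "hello"}],
)
```
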
Krish Dholakia
f5841eb84d
fix(router.py): add more deployment timeout debug information for tim… (#8523)
* fix(router.py): add more deployment timeout debug information for timeout errors

help understand why some calls in high-traffic don't respect their model-specific timeouts

* test(test_convert_dict_to_response.py): unit test ensuring empty str is not converted to None

Addresses https://github.com/BerriAI/litellm/issues/8507

* fix(convert_dict_to_response.py): handle empty message str - don't return back as 'None'

Fixes https://github.com/BerriAI/litellm/issues/8507

* test(test_completion.py): add e2e test
2025-02-13 17:10:22 -08:00
Krrish Dholakia
2eee1cdd02 test: fix test 2025-02-12 22:44:41 -08:00
Krish Dholakia
305049a968
Litellm dev 02 12 2025 p1 (#8494)
* Resolves https://github.com/BerriAI/litellm/issues/6625 (#8459)

- enables no auth for SMTP

Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>

* add sonar pricings (#8476)

* add sonar pricings

* Update model_prices_and_context_window.json

* Update model_prices_and_context_window.json

* Update model_prices_and_context_window_backup.json

* test: fix test

---------

Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
2025-02-12 22:39:29 -08:00
Ishaan Jaff
54811cf595 test_async_router_context_window_fallback 2025-02-12 18:14:35 -08:00
Ishaan Jaff
40e3af0428
(Redis Cluster) - Fixes for using redis cluster + pipeline (#8442)
* update RedisCluster creation

* update RedisClusterCache

* add redis ClusterCache

* update async_set_cache_pipeline

* cleanup redis cluster usage

* fix redis pipeline

* test_init_async_client_returns_same_instance

* fix redis cluster

* update mypy_path

* fix init_redis_cluster

* remove stub

* test redis commit

* ClusterPipeline

* fix import

* RedisCluster import

* fix redis cluster

* Potential fix for code scanning alert no. 2129: Clear-text logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* fix naming of redis cluster integration

* test_redis_caching_ttl_pipeline

* fix async_set_cache_pipeline

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-02-12 18:01:32 -08:00
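
A sketch of redis-cluster-backed caching per the fixes above, assuming the `redis_startup_nodes` cache param; node addresses are placeholders. Pipeline writes (`async_set_cache_pipeline`) should use `ClusterPipeline` underneath:

```python
import litellm
from litellm.caching.caching import Cache  # path may differ on older versions

# Point the cache at a redis cluster rather than a single node.
litellm.cache = Cache(
    type="redis",
    redis_startup_nodes=[
        {"host": "127.0.0.1", "port": 7001},
        {"host": "127.0.0.1", "port": 7002},
    ],
)
```
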
Ishaan Jaff
78f6fdcd92 test_async_router_context_window_fallback 2025-02-12 17:33:27 -08:00
Ishaan Jaff
faee508d1f fix test_async_router_context_window_fallback 2025-02-12 16:55:49 -08:00
Krrish Dholakia
b5ff15b3d3 test: skip redundant test 2025-02-10 22:13:58 -08:00
Krrish Dholakia
e9a861ec32 feat(guardrails.py): return specific litellm params in /guardrails/list endpoint
support returning mode, default_on and guardrail name on `/guardrails/list` endpoint
2025-02-10 22:13:58 -08:00
Ishaan Jaff
00c596a852
(Feat) - Allow viewing Request/Response Logs stored in GCS Bucket (#8449)
* BaseRequestResponseFetchFromCustomLogger

* get_active_base_request_response_fetch_from_custom_logger

* get_request_response_payload

* ui_view_request_response_for_request_id

* fix uiSpendLogDetailsCall

* fix get_request_response_payload

* ui fix RequestViewer

* use 1 class AdditionalLoggingUtils

* ui_view_request_response_for_request_id

* cache the prefetch logs details

* refactor prefetch

* test view request/resp logs

* fix code quality

* fix get_request_response_payload

* uninstall posthog
prevent it from being added in ci/cd

* fix posthog

* fix traceloop test

* fix linting error
2025-02-10 20:38:55 -08:00
Krish Dholakia
9c4c7813fb
Allow org admin to create teams on UI (#8407)
* fix(client_initialization_utils.py): handle custom llm provider set with valid value not from model name

* fix(handle_jwt.py): handle groups not existing in jwt token

if user not in group, this won't exist

* fix(handle_jwt.py): add new `enforce_team_based_model_access` flag to jwt auth

allows proxy admin to enforce user can only call model if team has access

* feat(navbar.tsx): expose new dropdown in navbar - allow org admin to create teams within org context

* fix(navbar.tsx): remove non-functional cogicon

* fix(proxy/utils.py): include user-org memberships in `/user/info` response

return orgs user is a member of and the user role within org

* feat(organization_endpoints.py): allow internal user to query `/organizations/list` and get all orgs they belong to

enables org admin to select org they belong to, to create teams

* fix(navbar.tsx): show change in ui when org switcher clicked

* feat(page.tsx): update user role based on org they're in

allows org admin to create teams in the org context

* feat(teams.tsx): working e2e flow for allowing org admin to add new teams

* style(navbar.tsx): clarify switching orgs on UI is in BETA

* fix(organization_endpoints.py): handle getting but not setting members

* test: fix test

* fix(client_initialization_utils.py): revert custom llm provider handling fix - causing unintended issues

* docs(token_auth.md): cleanup docs
2025-02-09 00:07:15 -08:00
Krish Dholakia
e4411e4815
Allow editing model api key + provider on UI (#8406)
* fix(parallel_request_limiter.py): add back parallel request information to max parallel request limiter

Resolves https://github.com/BerriAI/litellm/issues/8392

* test: mark flaky test to handle time based tracking issues

* feat(model_management_endpoints.py): expose new patch `/model/{model_id}/update` endpoint

Allows updating specific values of a model in db - makes it easy for admin to know this by calling it a PATCH

* feat(edit_model_modal.tsx): allow user to update llm provider + api key on the ui

* fix: fix linting error
2025-02-08 23:50:47 -08:00
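
A sketch of the new partial-update endpoint, assuming a running proxy and an admin key (URL, model id, and key are placeholders); only the fields being changed go in the body:

```python
import requests

# PATCH only the fields you want to change on a model in the db.
resp = requests.patch(
    "http://localhost:4000/model/my-model-id/update",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "litellm_params": {
            "custom_llm_provider": "anthropic",
            "api_key": "new-api-key",
        }
    },
)
print(resp.status_code, resp.json())
```
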
Ishaan Jaff
b242c66a3b
(Feat) - Add /bedrock/invoke support for all Anthropic models (#8383)
* use anthropic transformation for bedrock/invoke

* use anthropic transforms for bedrock invoke claude

* TestBedrockInvokeClaudeJson

* add AmazonAnthropicClaudeStreamDecoder

* pass bedrock_invoke_provider to make_call

* fix _get_base_bedrock_model

* fix get_bedrock_route

* fix bedrock routing

* fixes for bedrock invoke

* test_all_model_configs

* fix AWSEventStreamDecoder linting

* fix code qa

* test_bedrock_get_base_model

* test_get_model_info_bedrock_models

* test_bedrock_base_model_helper

* test_bedrock_route_detection
2025-02-07 22:41:11 -08:00
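
A sketch of the invoke route for Anthropic models added above, assuming litellm's `bedrock/invoke/<model>` naming; the Claude model id is an example, and any Anthropic model id available on Bedrock should fit here:

```python
import litellm

# Call Claude on Bedrock via the invoke API route.
response = litellm.completion(
    model="bedrock/invoke/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```
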
Krish Dholakia
5d170162d3
fix(nvidia_nim/embed.py): add 'dimensions' support (#8302)
* fix(nvidia_nim/embed.py): add 'dimensions' support

Fixes https://github.com/BerriAI/litellm/issues/8238

* fix(proxy_Server.py): initialize router redis cache if setup on proxy

Fixes https://github.com/BerriAI/litellm/issues/6602

* test: add unit testing for new helper function
2025-02-07 16:19:32 -08:00
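
A sketch of the `dimensions` param on NVIDIA NIM embeddings per the fix above; the model name is a placeholder:

```python
import litellm

# Request reduced-dimension embedding vectors from a NIM model.
response = litellm.embedding(
    model="nvidia_nim/nvidia/nv-embedqa-e5-v5",
    input=["hello world"],
    dimensions=512,  # newly supported param
)
print(len(response.data[0]["embedding"]))
```
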
Krish Dholakia
f031926b82
fix(utils.py): handle key error in msg validation (#8325)
* fix(utils.py): handle key error in msg validation

* Support running Aim Guard during LLM call (#7918)

* support running Aim Guard during LLM call

* Rename header

* adjust docs and fix type annotations

* fix(timeout.md): doc fix for openai example on dynamic timeouts

---------

Co-authored-by: Tomer Bin <117278227+hxtomer@users.noreply.github.com>
2025-02-06 18:13:46 -08:00
Krish Dholakia
b4e5c0de69
Improve rpm check on keys (#8301)
* fix(parallel_request_limiter.py): initial commit that solves the rpm limit check on keys

Fixes https://github.com/BerriAI/litellm/issues/6938

* fix(parallel_request_limiter.py): simpler approach - just increment RPM in pre call hook instead of on success

* fix(parallel_request_limiter.py): pass testing

* fix: fix linting error

* fix(parallel_request_limiter.py): fix parallel request check for keys
2025-02-05 20:23:08 -08:00
Ishaan Jaff
818792228c
(Refactor) - migrate bedrock invoke to BaseLLMHTTPHandler class (#8290)
* initial transform for invoke

* invoke transform_response

* working - able to make request

* working get_complete_url

* working - invoke now runs on llm_http_handler

* fix unused imports

* track litellm overhead ms

* working stream request

* sign_request transform

* sign_request update

* use has_async_custom_stream_wrapper property

* use get_async_custom_stream_wrapper in base llm http handler

* fix make_call in invoke handler

* fix invoke with streaming get_async_custom_stream_wrapper

* working bedrock async streaming with invoke

* fix make call handler for bedrock

* test_all_model_configs

* fix test_bedrock_custom_prompt_template

* sync streaming for bedrock invoke

* fix _add_stream_param_to_request_body

* test_async_text_completion_bedrock

* fix transform_request

* fix get_supported_openai_params

* fix test supports tool choice

* fix test_supports_tool_choice

* add unit test coverage for bedrock invoke transform

* fix location of transformation files

* update import loc

* fix bedrock invoke unit tests

* fix import for max completion tokens
2025-02-05 18:58:55 -08:00
Ishaan Jaff
3a6349d871
(Feat) - Add support for structured output on bedrock/nova models + add util litellm.supports_tool_choice (#8264)
* fix supports_tool_choice

* TestBedrockNovaJson

* use supports_tool_choice

* fix supports_tool_choice

* add supports_tool_choice param

* script to add fields to model cost map

* test_supports_tool_choice

* test_supports_tool_choice

* fix supports tool choice check

* test_supports_tool_choice_simple_tests

* fix supports_tool_choice check

* fix supports_tool_choice bedrock

* test_supports_tool_choice

* test_supports_tool_choice

* fix bedrock/eu-west-3/mistral.mistral-large-2402-v1:0

* ci/cd run again

* test_supports_tool_choice_simple_tests

* TestGoogleAIStudioGemini temp - remove to run ci/cd

* test_aaalangfuse_logging_metadata

* TestGoogleAIStudioGemini

* test_check_provider_match

* remove add param to map
2025-02-04 21:47:16 -08:00
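
A sketch of the new util added above; the nova model id is an example, and the mistral id is taken from the commit messages:

```python
import litellm

# Check whether a model supports the `tool_choice` param before sending it.
print(litellm.supports_tool_choice(model="bedrock/us.amazon.nova-pro-v1:0"))
print(litellm.supports_tool_choice(model="bedrock/eu-west-3/mistral.mistral-large-2402-v1:0"))
```
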
Ishaan Jaff
06a9829749 all e2e langfuse tests now run on test_langfuse_e2e_test.p 2025-02-04 21:22:51 -08:00
Krish Dholakia
c17342ac5b
fix(openai/): allows 'reasoning_effort' param to be passed correctly (#8227)
* fix(openai/): allows 'reasoning_effort' param to be passed correctly

Fixes https://github.com/BerriAI/litellm/issues/8217

* test: update test to handle gemini token counter change

* fix(factory.py): fix bedrock http:// handling

* test: fix test

* test: update testing for new openai sdk
2025-02-03 22:39:10 -08:00
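
The fix above corrects pass-through of `reasoning_effort`; a short sketch of the call:

```python
import litellm

# `reasoning_effort` now maps through correctly to OpenAI o-series models.
response = litellm.completion(
    model="o3-mini",
    messages=[{"role": "user", "content": "Prove there are infinitely many primes."}],
    reasoning_effort="low",  # "low" | "medium" | "high"
)
print(response.choices[0].message.content)
```
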
Krrish Dholakia
7ddb034b31 test: update test to handle gemini token counter change 2025-02-03 18:12:53 -08:00
Krish Dholakia
c8494abdea
test(base_llm_unit_tests.py): add test to ensure drop params is respe… (#8224)
* test(base_llm_unit_tests.py): add test to ensure drop params is respected

* fix(types/prometheus.py): use typing_extensions for python3.8 compatibility

* build: add cherry picked commits
2025-02-03 16:04:44 -08:00
Krish Dholakia
97b8de17ab
LiteLLM Minor Fixes & Improvements (01/16/2025) - p2 (#7828)
* fix(vertex_ai/gemini/transformation.py): handle 'http://' image urls

* test: add base test for `http:` url's

* fix(factory.py/get_image_details): follow redirects

allows http calls to work

* fix(codestral/): fix stream chunk parsing on last chunk of stream

* Azure ad token provider (#6917)

* Update azure.py

Added optional parameter azure ad token provider

* Added parameter to main.py

* Found token provider arg location

* Fixed embeddings

* Fixed ad token provider

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix: fix linting errors

* fix(main.py): leave out o1 route for azure ad token provider, for now

get v0 out for sync azure gpt route to begin with

* test: skip http:// test for fireworks ai

model does not support it

* refactor: cleanup dead code

* fix: revert http:// url passthrough for gemini

google ai studio raises errors

* test: fix test

---------

Co-authored-by: bahtman <anton@baht.dk>
2025-02-02 23:17:50 -08:00
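
A sketch of the optional `azure_ad_token_provider` param merged above, assuming the azure-identity helper; deployment and endpoint values are placeholders. Per the commit notes, the o1 route is excluded for now, so this targets the sync Azure GPT route:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
import litellm

# Authenticate to Azure OpenAI with an AD token provider instead of a
# static api_key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",
    messages=[{"role": "user", "content": "hi"}],
    api_base="https://my-endpoint.openai.azure.com",
    api_version="2024-02-15-preview",
    azure_ad_token_provider=token_provider,
)
```
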
Krish Dholakia
1105e35538
Complete o3 model support (#8183)
* fix(o_series_transformation.py): add 'reasoning_effort' as o series model param

Closes https://github.com/BerriAI/litellm/issues/8182

* fix(main.py): ensure `reasoning_effort` is a mapped openai param

* refactor(azure/): rename o1_[x] files to o_series_[x]

* refactor(base_llm_unit_tests.py): refactor testing for o series reasoning effort

* test(test_azure_o_series.py): have azure o series tests correctly inherit from base o series model tests

* feat(base_utils.py): support translating 'developer' role to 'system' role for non-openai providers

Makes it easy to switch from openai to anthropic

* fix: fix linting errors

* fix(base_llm_unit_tests.py): fix test

* fix(main.py): add missing param
2025-02-02 22:36:37 -08:00
Krish Dholakia
e4566d7b1c
fix(main.py): fix passing openrouter specific params (#8184)
* fix(main.py): fix passing openrouter specific params

Fixes https://github.com/BerriAI/litellm/issues/8130

* test(test_get_model_info.py): add check for region name w/ cris model

Resolves https://github.com/BerriAI/litellm/issues/8115
2025-02-02 22:23:14 -08:00
Krish Dholakia
91ed05df29
Litellm dev contributor prs 01 31 2025 (#8168)
* Add O3-Mini for Azure and Remove Vision Support (#8161)

* Azure released O3-mini at the same time as OpenAI, so I've added support here. Confirmed to work with Sweden Central.

* [FIX] replace cgi for python 3.13 with email.Message as suggested in PEP 594 (#8160)

* Update model_prices_and_context_window.json (#8120)

codestral2501 pricing on vertex_ai

* Fix/db view names (#8119)

* Fix to case sensitive DB Views name

* Fix to case sensitive DB View names

* Added quotes to check query as well

* Added quotes to create view query

* test: handle server error for flaky test

vertex ai has unstable endpoints

---------

Co-authored-by: Wanis Elabbar <70503629+elabbarw@users.noreply.github.com>
Co-authored-by: Honghua Dong <dhh1995@163.com>
Co-authored-by: superpoussin22 <vincent.nadal@orange.fr>
Co-authored-by: Miguel Armenta <37154380+ma-armenta@users.noreply.github.com>
2025-02-01 09:05:20 -08:00
Ishaan Jaff
2cf0daa31c
(Fixes) OpenAI Streaming Token Counting + Fixes usage track when litellm.turn_off_message_logging=True (#8156)
* working streaming usage tracking

* fix test_async_chat_openai_stream_options

* fix await asyncio.sleep(1)

* test_async_chat_azure

* fix s3 logging

* fix get_stream_options

* fix get_stream_options

* fix streaming handler

* test_stream_token_counting_with_redaction

* fix codeql concern
2025-01-31 15:06:37 -08:00
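
A sketch of the streaming usage tracking fixed above, using OpenAI-style `stream_options`:

```python
import litellm

# With include_usage set, the final stream chunk carries token counts, which
# the logging callbacks can now track correctly.
stream = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    if getattr(chunk, "usage", None):
        print(chunk.usage)  # prompt_tokens / completion_tokens / total_tokens
```
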
Krish Dholakia
de261e2120
Doc updates + management endpoint fixes (#8138)
* Litellm dev 01 29 2025 p4 (#8107)

* fix(key_management_endpoints.py): always get db team

Fixes https://github.com/BerriAI/litellm/issues/7983

* test(test_key_management.py): add unit test enforcing check_db_only is always true on key generate checks

* test: fix test

* test: skip gemini thinking

* Litellm dev 01 29 2025 p3 (#8106)

* fix(__init__.py): reduces size of __init__.py and reduces scope for errors by using correct param

* refactor(__init__.py): refactor init by cleaning up redundant params

* refactor(__init__.py): move more constants into constants.py

cleanup root

* refactor(__init__.py): more cleanup

* feat(__init__.py): expose new 'disable_hf_tokenizer_download' param

enables hf model usage in offline env

* docs(config_settings.md): document new disable_hf_tokenizer_download param

* fix: fix linting error

* fix: fix unsafe comparison

* test: fix test

* docs(public_teams.md): add doc showing how to expose public teams for users to join

* docs: add beta disclaimer on public teams

* test: update tests
2025-01-30 22:56:41 -08:00
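
A sketch of the new `disable_hf_tokenizer_download` flag documented above, for offline environments; the model name is an example:

```python
import litellm

# Never fetch tokenizers from Hugging Face; litellm falls back to its
# default tokenizer for token counting.
litellm.disable_hf_tokenizer_download = True

tokens = litellm.token_counter(
    model="mistral/mistral-large-latest",
    messages=[{"role": "user", "content": "hello"}],
)
print(tokens)
```
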
Krish Dholakia
69a6da4727
Litellm dev 01 30 2025 p2 (#8134)
* feat(lowest_tpm_rpm_v2.py): fix redis cache check to use >= instead of >

makes it consistent

* test(test_custom_guardrails.py): add more unit testing on default on guardrails

ensure it runs if user sent guardrail list is empty

* docs(quick_start.md): clarify default on guardrails run even if user guardrails list contains other guardrails

* refactor(litellm_logging.py): refactor no-log to helper util

allows for more consistent behavior

* feat(litellm_logging.py): add event hook to verbose logs

* fix(litellm_logging.py): add unit testing to ensure `litellm.disable_no_log_param` is respected

* docs(logging.md): document how to disable 'no-log' param

* test: fix test to handle feb

* test: cleanup old bedrock model

* fix: fix router check
2025-01-30 22:18:53 -08:00