Commit graph

1687 commits

Ishaan Jaff
c0f3100934
[Bug Fix] - /vertex_ai/ was not detected as llm_api_route on pass through but vertex-ai was (#8186)
* fix mapped_pass_through_routes

* fix route checks

* update test_is_llm_api_route
2025-02-01 17:26:08 -08:00
Krish Dholakia
9e65f867ab
test: add more unit testing for team member endpoints (#8170)
* test: add more unit testing for team member add

* fix(team_endpoints.py): add validation check to prevent same user from being added to team again

prevents duplicates

* fix(team_endpoints.py): raise error if `/team/member_delete` called on member that's not in team

prevent being able to call delete on same member multiple times

* test: update initial tests

* test: fix test

* test: update test to handle no member duplication
2025-02-01 11:23:00 -08:00
Krish Dholakia
23f458d2da
Improved O3 + Azure O3 support (#8181)
* fix: support azure o3 model family for fake streaming workaround (#8162)

* fix: support azure o3 model family for fake streaming workaround

* refactor: rename helper to is_o_series_model for clarity

* update function calling parameters for o3 models (#8178)

* refactor(o1_transformation.py): refactor o1 config to be o series config, expand o series model check to o3

ensures max_tokens is correctly translated for o3

* feat(openai/): refactor o1 files to be 'o_series' files

expands naming to cover o3

* fix(azure/chat/o1_handler.py): azure openai is an instance of openai - was causing resets

* test(test_azure_o_series.py): assert stream faked for azure o3 mini

Resolves https://github.com/BerriAI/litellm/pull/8162

* fix(o1_transformation.py): fix o1 transformation logic to handle explicit o1_series routing

* docs(azure.md): update doc with `o_series/` model name

---------

Co-authored-by: byrongrogan <47910641+byrongrogan@users.noreply.github.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
2025-02-01 09:52:28 -08:00
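
For context, a hedged sketch of what the `o_series/` model naming from the docs change above looks like in practice. The deployment name, endpoint, and key are assumptions for illustration, not taken from this log:

```python
# Hedged sketch: `azure/o_series/<deployment>` marks a non-standard Azure
# deployment name as an o-series model so litellm applies the o-series
# handling described above. Endpoint and key are hypothetical.
import litellm

response = litellm.completion(
    model="azure/o_series/my-o3-mini-deployment",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://my-endpoint.openai.azure.com",  # hypothetical endpoint
    api_key="sk-...",                                 # hypothetical key
    max_tokens=100,  # translated appropriately for o-series models per the refactor above
)
print(response.choices[0].message.content)
```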
Krish Dholakia
91ed05df29
Litellm dev contributor prs 01 31 2025 (#8168)
* Add O3-Mini for Azure and Remove Vision Support (#8161)

* Azure released O3-mini at the same time as OpenAI, so I've added support here. Confirmed to work with Sweden Central.

* [FIX] replace cgi for python 3.13 with email.Message as suggested in PEP 594 (#8160)

* Update model_prices_and_context_window.json (#8120)

codestral2501 pricing on vertex_ai

* Fix/db view names (#8119)

* Fix to case sensitive DB Views name

* Fix to case sensitive DB View names

* Added quotes to check query as well

* Added quotes to create view query

* test: handle server error  for flaky test

vertex ai has unstable endpoints

---------

Co-authored-by: Wanis Elabbar <70503629+elabbarw@users.noreply.github.com>
Co-authored-by: Honghua Dong <dhh1995@163.com>
Co-authored-by: superpoussin22 <vincent.nadal@orange.fr>
Co-authored-by: Miguel Armenta <37154380+ma-armenta@users.noreply.github.com>
2025-02-01 09:05:20 -08:00
Krish Dholakia
8d0db8b379
build(schema.prisma): add new sso_user_id to LiteLLM_UserTable (#8167)
* build(schema.prisma): add new `sso_user_id` to LiteLLM_UserTable

easier way to store sso id for existing user

Allows existing user added to team, to login via SSO

* test(test_auth_checks.py): add unit testing for fuzzy user object get

* fix(handle_jwt.py): fix merge conflicts
2025-01-31 23:04:05 -08:00
Krish Dholakia
2147cad307
Litellm dev 01 31 2025 p2 (#8164)
* docs(token_auth.md): clarify title

* refactor(handle_jwt.py): add jwt auth manager + refactor to handle groups

allows user to call model if user belongs to group with model access

* refactor(handle_jwt.py): refactor to first check if service call then check user call

* feat(handle_jwt.py): new `enforce_team_access` param

only allows user to call model if a team they belong to has model access

allows controlling user model access by team

* fix(handle_jwt.py): fix error string, remove unnecessary param

* docs(token_auth.md): add controlling model access for jwt tokens via teams to docs

* test: fix tests post refactor

* fix: fix linting errors

* fix: fix linting error

* test: fix import error
2025-01-31 22:52:35 -08:00
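
A minimal sketch of the access rule the `enforce_team_access` work above describes: a user may call a model only if some team they belong to has access to it. The function and data shapes below are hypothetical stand-ins, not litellm's actual implementation:

```python
# Hypothetical sketch of team-gated model access, assuming a simple
# mapping of team id -> allowed model names.
from typing import Dict, List

def user_can_call_model(user_teams: List[str], model: str,
                        team_models: Dict[str, List[str]]) -> bool:
    """Return True if any of the user's teams grants access to `model`."""
    for team_id in user_teams:
        allowed = team_models.get(team_id, [])
        if model in allowed or "*" in allowed:
            return True
    return False

# Example: a user in "eng" can call gpt-4o via the team's model list.
assert user_can_call_model(["eng"], "gpt-4o", {"eng": ["gpt-4o"]})
assert not user_can_call_model(["sales"], "gpt-4o", {"eng": ["gpt-4o"]})
```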
Ishaan Jaff
9ff27809b2
(Feat) add bedrock/deepseek custom import models (#8132)
* add support for using llama spec with bedrock

* fix get_bedrock_invoke_provider

* add support for using bedrock provider in mappings

* working request

* test_bedrock_custom_deepseek

* test_bedrock_custom_deepseek

* fix _get_model_id_for_llama_like_model

* test_bedrock_custom_deepseek

* doc DeepSeek-R1-Distill-Llama-70B

* test_bedrock_custom_deepseek
2025-01-31 18:40:44 -08:00
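
A hedged sketch of calling a Bedrock custom-imported DeepSeek model through the llama invoke spec referenced above. The `bedrock/llama/<model-arn>` route shape and the ARN are assumptions for illustration:

```python
# Hedged sketch: routing to a Bedrock custom import (e.g. a DeepSeek R1
# Llama distill) via the llama-like spec. The model string format and ARN
# below are assumptions, not confirmed from this log.
import litellm

response = litellm.completion(
    model="bedrock/llama/arn:aws:bedrock:us-west-2:123456789012:imported-model/abcd1234",
    messages=[{"role": "user", "content": "Explain chain-of-thought distillation."}],
)
print(response.choices[0].message.content)
```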
Ishaan Jaff
2cf0daa31c
(Fixes) OpenAI Streaming Token Counting + Fixes usage track when litellm.turn_off_message_logging=True (#8156)
* working streaming usage tracking

* fix test_async_chat_openai_stream_options

* fix await asyncio.sleep(1)

* test_async_chat_azure

* fix s3 logging

* fix get_stream_options

* fix get_stream_options

* fix streaming handler

* test_stream_token_counting_with_redaction

* fix codeql concern
2025-01-31 15:06:37 -08:00
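
The fixes above concern counting tokens from streamed responses. OpenAI-style APIs report stream usage via `stream_options={"include_usage": True}`, which litellm forwards; a sketch of the path these fixes exercise:

```python
# Sketch of streaming with usage included. `stream_options` is the standard
# OpenAI parameter; the usage object arrives on the final chunk.
import litellm

stream = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Count to three."}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
    if getattr(chunk, "usage", None):  # present only on the last chunk
        print("\n", chunk.usage)
```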
Krish Dholakia
de261e2120
Doc updates + management endpoint fixes (#8138)
* Litellm dev 01 29 2025 p4 (#8107)

* fix(key_management_endpoints.py): always get db team

Fixes https://github.com/BerriAI/litellm/issues/7983

* test(test_key_management.py): add unit test enforcing check_db_only is always true on key generate checks

* test: fix test

* test: skip gemini thinking

* Litellm dev 01 29 2025 p3 (#8106)

* fix(__init__.py): reduces size of __init__.py and reduces scope for errors by using correct param

* refactor(__init__.py): refactor init by cleaning up redundant params

* refactor(__init__.py): move more constants into constants.py

cleanup root

* refactor(__init__.py): more cleanup

* feat(__init__.py): expose new 'disable_hf_tokenizer_download' param

enables hf model usage in offline env

* docs(config_settings.md): document new disable_hf_tokenizer_download param

* fix: fix linting error

* fix: fix unsafe comparison

* test: fix test

* docs(public_teams.md): add doc showing how to expose public teams for users to join

* docs: add beta disclaimer on public teams

* test: update tests
2025-01-30 22:56:41 -08:00
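
A hedged sketch of the new `disable_hf_tokenizer_download` param for offline environments. Treating it as a module-level flag is an assumption based on the `__init__.py` changes above:

```python
# Hedged sketch, assuming `disable_hf_tokenizer_download` is a module-level
# flag as the __init__.py refactor above suggests.
import litellm

litellm.disable_hf_tokenizer_download = True  # don't fetch tokenizers from Hugging Face

# Token counting can now run in an offline environment, falling back to a
# local/default tokenizer instead of downloading one.
n = litellm.token_counter(model="gpt-4o-mini", text="hello offline world")
print(n)
```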
Krish Dholakia
69a6da4727
Litellm dev 01 30 2025 p2 (#8134)
* feat(lowest_tpm_rpm_v2.py): fix redis cache check to use >= instead of >

makes it consistent

* test(test_custom_guardrails.py): add more unit testing on default on guardrails

ensure it runs even if the user-sent guardrail list is empty

* docs(quick_start.md): clarify default on guardrails run even if user guardrails list contains other guardrails

* refactor(litellm_logging.py): refactor no-log to helper util

allows for more consistent behavior

* feat(litellm_logging.py): add event hook to verbose logs

* fix(litellm_logging.py): add unit testing to ensure `litellm.disable_no_log_param` is respected

* docs(logging.md): document how to disable 'no-log' param

* test: fix test to handle feb

* test: cleanup old bedrock model

* fix: fix router check
2025-01-30 22:18:53 -08:00
Ishaan Jaff
da3ddd2282 fix test_generate_and_update_key 2025-01-30 21:34:33 -08:00
Ishaan Jaff
4005a51db2
(UI) fix adding Vertex Models (#8129)
* fix handleSubmit

* update handleAddModelSubmit

* add jest testing for ui

* add step for running ui unit tests

* add validate json step to add model

* ui jest testing fixes

* update package lock

* ci/cd run again

* fix antd import

* run jest tests first

* fix antd install

* fix ui unit tests

* fix unit test ui
2025-01-30 21:11:08 -08:00
Ishaan Jaff
8a235e7d38
(Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112)
* LoggingCallbackManager

* add logging_callback_manager

* use logging_callback_manager

* add add_litellm_failure_callback

* use add_litellm_callback

* use add_litellm_async_success_callback

* add_litellm_async_failure_callback

* linting fix

* fix logging callback manager

* test_duplicate_multiple_loggers_test

* use _reset_all_callbacks

* fix testing with dup callbacks

* test_basic_image_generation

* reset callbacks for tests

* fix check for _add_custom_logger_to_list

* fix test_amazing_sync_embedding

* fix _get_custom_logger_key

* fix batches testing

* fix _reset_all_callbacks

* fix _check_callback_list_size

* add callback_manager_test

* fix test gemini-2.0-flash-thinking-exp-01-21
2025-01-30 19:35:50 -08:00
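
A minimal sketch of the dedup idea the manager above implements: append a callback only if an equivalent one is not already registered. The class and method names below are hypothetical stand-ins, not litellm's actual manager:

```python
# Hypothetical sketch of duplicate-safe callback registration. Strings dedup
# by value; custom logger instances dedup by class name, so two instances of
# the same logger class are not added twice.
from typing import Callable, List, Union

class CallbackManagerSketch:
    def __init__(self) -> None:
        self.success_callbacks: List[Union[str, Callable]] = []

    def _key(self, cb: Union[str, Callable]) -> str:
        return cb if isinstance(cb, str) else type(cb).__name__

    def add_success_callback(self, cb: Union[str, Callable]) -> None:
        existing = {self._key(c) for c in self.success_callbacks}
        if self._key(cb) not in existing:
            self.success_callbacks.append(cb)

mgr = CallbackManagerSketch()
mgr.add_success_callback("langfuse")
mgr.add_success_callback("langfuse")  # ignored: duplicate
assert mgr.success_callbacks == ["langfuse"]
```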
Ishaan Jaff
89d0d893fd fix test gemini-2.0-flash-thinking-exp-01-21 2025-01-30 14:05:59 -08:00
Krish Dholakia
ba8ba9eddb
feat(databricks/chat/transformation.py): add tools and 'tool_choice' param support (#8076)
* feat(databricks/chat/transformation.py): add tools and 'tool_choice' param support

Closes https://github.com/BerriAI/litellm/issues/7788

* refactor: cleanup redundant file

* test: mark flaky test

* test: mark all parallel request tests as flaky
2025-01-29 21:09:07 -08:00
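
A hedged example of the `tools` / `tool_choice` support added above. The Databricks model name is an assumption for illustration; the tool schema is the standard OpenAI function format:

```python
# Hedged example: function calling against a Databricks model via litellm.
# The model name below is an assumption.
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="databricks/databricks-meta-llama-3-1-70b-instruct",
    messages=[{"role": "user", "content": "Weather in Berlin?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```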
Ishaan Jaff
12a84897cf run ci/cd again 2025-01-29 20:55:49 -08:00
Krish Dholakia
dad24f2b52
Litellm dev 01 29 2025 p2 (#8102)
* docs: cleanup doc

* feat(bedrock/): initial commit adding bedrock/converse_like/<model> route support

allows routing to a converse like endpoint

Resolves https://github.com/BerriAI/litellm/issues/8085

* feat(bedrock/chat/converse_transformation.py): make converse config base config compatible

enables new 'converse_like' route

* feat(converse_transformation.py): enables using the proxy with converse like api endpoint

Resolves https://github.com/BerriAI/litellm/issues/8085
2025-01-29 20:53:37 -08:00
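
A hedged sketch of the new `bedrock/converse_like/<model>` route described above, which sends requests to any Converse-compatible endpoint. The endpoint URL and key are hypothetical:

```python
# Hedged sketch: routing to a Converse-like endpoint. URL and key below are
# hypothetical placeholders.
import litellm

response = litellm.completion(
    model="bedrock/converse_like/my-proxied-claude",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://example.com/v1/converse",  # any Converse-compatible endpoint
    api_key="sk-...",
)
print(response.choices[0].message.content)
```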
Ishaan Jaff
31e967cbbd test_generate_and_update_key 2025-01-29 18:48:34 -08:00
Ishaan Jaff
33470e15b4 ci/cd run again 2025-01-29 17:57:32 -08:00
Ishaan Jaff
b6d61ec22b
(Feat) pass through vertex - allow using credentials defined on litellm router for vertex pass through (#8100)
* test_add_vertex_pass_through_deployment

* VertexPassThroughRouter

* fix use_in_pass_through

* VertexPassThroughRouter

* fix vertex_credentials

* allow using _initialize_deployment_for_pass_through

* test_add_vertex_pass_through_deployment

* _set_default_vertex_config

* fix verbose_proxy_logger

* fix use_in_pass_through

* fix _get_token_and_url

* test_get_vertex_location_from_url

* test_get_vertex_credentials_none

* run pt unit testing again

* fix add_vertex_credentials

* test_adding_deployments.py

* rename file
2025-01-29 17:54:02 -08:00
Ishaan Jaff
46b44f3a7f ci/cd run again
2025-01-28 22:18:03 -08:00
Ishaan Jaff
5a7dc11432 test fix test_async_create_batch use only openai for testing, hitting azure limits 2025-01-28 22:17:49 -08:00
Ishaan Jaff
b812286534
(fix) - proxy reliability, ensure duplicate callbacks are not added to proxy (#8067)
* refactor _add_callbacks_from_db_config

* fix check for _custom_logger_exists_in_litellm_callbacks

* move loc of test utils

* run ci/cd again

* test_add_custom_logger_callback_to_specific_event_with_duplicates_callbacks

* fix _custom_logger_class_exists_in_success_callbacks

* unit testing for test_add_callbacks_from_db_config

* test_custom_logger_exists_in_callbacks_individual_functions

* fix config.yml

* fix test test_stream_chunk_builder_openai_audio_output_usage - use direct dict comparison
2025-01-28 21:01:56 -08:00
Ishaan Jaff
ae7b042bc2
(beta ui - spend logs view fixes & Improvements 1) (#8062)
* ui 1 - show correct msg on no logs

* fix dup country col

* backend - allow filtering by team_id and api_key

* fix ui_view_spend_logs

* ui update query params

* working team id and key hash filters

* fix filter ref - don't hold on them as they are

* fix _model_custom_llm_provider_matches_wildcard_pattern

* fix test test_stream_chunk_builder_openai_audio_output_usage - use direct dict comparison
2025-01-28 20:34:22 -08:00
Krish Dholakia
d9eb8f42ff
Litellm dev 01 27 2025 p3 (#8047)
* docs(reliability.md): add doc on disabling fallbacks per request

* feat(litellm_pre_call_utils.py): support reading request timeout from request headers - new `x-litellm-timeout` param

Allows setting dynamic model timeouts from vercel's AI sdk

* test(test_proxy_server.py): add simple unit test for reading request timeout

* test(test_fallbacks.py): add e2e test to confirm timeout passed in request headers is correctly read

* feat(main.py): support passing metadata to openai in preview

Resolves https://github.com/BerriAI/litellm/issues/6022#issuecomment-2616119371

* fix(main.py): fix passing openai metadata

* docs(request_headers.md): document new request headers

* build: Merge branch 'main' into litellm_dev_01_27_2025_p3

* test: loosen test
2025-01-28 18:01:27 -08:00
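
A sketch of the new `x-litellm-timeout` request header from the commit above, which sets a per-request model timeout through the proxy. The proxy URL and key below are hypothetical:

```python
# Sketch: per-request timeout via the `x-litellm-timeout` header, using the
# OpenAI SDK against a litellm proxy. URL and key are hypothetical.
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # litellm proxy
    api_key="sk-1234",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi"}],
    extra_headers={"x-litellm-timeout": "5"},  # timeout in seconds, this call only
)
print(response.choices[0].message.content)
```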
Krish Dholakia
9c20c69915
Fix bedrock model pricing + add unit test using bedrock pricing api (#7978)
* test(test_completion_cost.py): add unit testing to ensure all bedrock models with region name have cost tracked

* feat: initial script to get bedrock pricing from amazon api

ensures bedrock pricing is accurate

* build(model_prices_and_context_window.json): correct bedrock model prices based on api check

ensures accurate bedrock pricing

* ci(config.yml): add bedrock pricing check to ci/cd

ensures litellm always maintains up-to-date pricing for bedrock models

* ci(config.yml): add beautiful soup to ci/cd

* test: bump groq model

* test: fix test
2025-01-28 17:57:49 -08:00
Krish Dholakia
8eaa5dc797
Bedrock document processing fixes (#8005)
* refactor(factory.py): refactor async bedrock message transformation to use async get request for image url conversion

improve latency of bedrock call

* test(test_bedrock_completion.py): add unit testing to ensure async image url get called for async bedrock call

* refactor(factory.py): refactor bedrock translation to use BedrockImageProcessor

reduces duplicate code

* fix(factory.py): fix bug not allowing pdf's to be processed

* fix(factory.py): fix bedrock converse document understanding with image url

* docs(bedrock.md): clarify all bedrock document types are supported

* refactor: cleanup redundant test + unused imports

* perf: improve perf with reusable clients

* test: fix test
2025-01-28 17:48:32 -08:00
Krish Dholakia
c2e3986bbc
fix(utils.py): handle failed hf tokenizer request during calls (#8032)
* fix(utils.py): handle failed hf tokenizer request during calls

prevents proxy from failing due to bad hf tokenizer calls

* fix(utils.py): convert failure callback str to custom logger class

Fixes https://github.com/BerriAI/litellm/issues/8013

* test(test_utils.py): fix test - avoid adding mlflow dep on ci/cd

* fix: add missing env vars to test

* test: cleanup redundant test
2025-01-28 17:20:36 -08:00
Ishaan Jaff
74e332bfdd fix test test_stream_chunk_builder_openai_audio_output_usage - use direct dict comparison
2025-01-28 16:28:24 -08:00
Krish Dholakia
2eaa0079f2
feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… (#8037)
* feat(handle_jwt.py): initial commit adding custom RBAC support on jwt auth

allows admin to define user role field and allowed roles which map to 'internal_user' on litellm

* fix(auth_checks.py): ensure user allowed to access model, when calling via personal keys

Fixes https://github.com/BerriAI/litellm/issues/8029

* feat(handle_jwt.py): support role based access with model permission control on proxy

Allows admin to just grant users roles on IDP (e.g. Azure AD/Keycloak) and user can immediately start calling models

* docs(rbac): add docs on rbac for model access control

make it clear how admin can use roles to control model access on proxy

* fix: fix linting errors

* test(test_user_api_key_auth.py): add unit testing to ensure rbac role is correctly enforced

* test(test_user_api_key_auth.py): add more testing

* test(test_users.py): add unit testing to ensure user model access is always checked for new keys

Resolves https://github.com/BerriAI/litellm/issues/8029

* test: fix unit test

* fix(dot_notation_indexing.py): fix typing to work with python 3.8
2025-01-28 16:27:06 -08:00
Ishaan Jaff
9644e197f7 deepseek api testing - deepseek is currently hanging
2025-01-27 22:04:36 -08:00
Ishaan Jaff
46469c6087 set timeout for deepseek testing 2025-01-27 21:25:28 -08:00
Steve Farthing
fe0f9213af Bing Search Pass Thru 2025-01-27 08:58:04 -05:00
Krish Dholakia
6bafdbc546
Litellm dev 01 25 2025 p4 (#8006)
* feat(main.py): use asyncio.sleep for mock_Timeout=true on async request

adds unit testing to ensure proxy does not fail if specific OpenAI requests hang (e.g. recent o1 outage)

* fix(streaming_handler.py): fix deepseek r1 return reasoning content on streaming

Fixes https://github.com/BerriAI/litellm/issues/7942

* Revert "fix(streaming_handler.py): fix deepseek r1 return reasoning content on streaming"

This reverts commit 7a052a64e3.

* fix(deepseek-r-1): return reasoning_content as a top-level param

ensures compatibility with existing tools that use it

* fix: fix linting error
2025-01-26 08:01:05 -08:00
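
Per the fix above, DeepSeek R1's reasoning is returned as a top-level `reasoning_content` param on the message. A hedged sketch of reading it; attribute presence may vary by provider and version:

```python
# Hedged sketch: reading the top-level `reasoning_content` param mentioned
# above. Using getattr defensively since not all models/versions return it.
import litellm

response = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 12 * 13?"}],
)
message = response.choices[0].message
print(getattr(message, "reasoning_content", None))  # model's reasoning trace
print(message.content)                              # final answer
```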
Krish Dholakia
03eef5a2a0
Fix custom pricing - separate provider info from model info (#7990)
* fix(utils.py): initial commit fixing custom cost tracking

refactors out provider specific model info from `get_model_info` - this was causing custom costs to be registered incorrectly

* fix(utils.py): cleanup `_supports_factory` to check provider info, if model info is None

some providers support features like vision across all models

* fix(utils.py): refactor to use _supports_factory

* test: update testing

* fix: fix linting errors

* test: fix testing
2025-01-25 21:49:28 -08:00
Ishaan Jaff
dcc3bbc264
(Fix) langfuse - setting LANGFUSE_FLUSH_INTERVAL (#8007)
* fix langfuse flush interval

* test_get_langfuse_flush_interval

* test_get_langfuse_flush_interval
2025-01-25 17:17:32 -08:00
Ishaan Jaff
d19614b8c0
(QA / testing) - Add e2e tests for key model access auth checks (#8000)
* fix _model_matches_any_wildcard_pattern_in_list

* test key model access checks

* add key_model_access_denied to ProxyErrorTypes

* update auth checks

* test_model_access_update

* test_team_model_access_patterns

* fix _team_model_access_check

* fix config used for otel testing

* test fix test_call_with_invalid_model

* fix model access check tests

* test_team_access_groups

* test _model_matches_any_wildcard_pattern_in_list
2025-01-25 17:15:11 -08:00
Krish Dholakia
08b124aeb6
Litellm dev 01 25 2025 p2 (#8003)
* fix(base_utils.py): support nested json schema passed in for anthropic calls

* refactor(base_utils.py): refactor ref parsing to prevent infinite loop

* test(test_openai_endpoints.py): refactor anthropic test to use bedrock

* fix(langfuse_prompt_management.py): add unit test for sync langfuse calls

Resolves https://github.com/BerriAI/litellm/issues/7938#issuecomment-2613293757
2025-01-25 16:50:57 -08:00
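
A minimal sketch of the loop-safe `$ref` expansion idea behind the refactor above: track already-visited refs so cyclic schemas terminate instead of recursing forever. This is an illustrative stand-in, not litellm's actual implementation:

```python
# Hypothetical sketch: JSON-schema $ref expansion with cycle detection.
from typing import Any, Dict, Optional, Set

def resolve_refs(node: Any, defs: Dict[str, Any],
                 seen: Optional[Set[str]] = None) -> Any:
    seen = seen or set()
    if isinstance(node, dict):
        ref = node.get("$ref")
        if ref is not None:
            if ref in seen:  # cycle detected: stop instead of looping forever
                return {}
            target = defs.get(ref.split("/")[-1], {})
            return resolve_refs(target, defs, seen | {ref})
        return {k: resolve_refs(v, defs, seen) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, defs, seen) for v in node]
    return node

schema = {"$ref": "#/$defs/Node"}
defs = {"Node": {"type": "object",
                 "properties": {"child": {"$ref": "#/$defs/Node"}}}}
print(resolve_refs(schema, defs))  # terminates despite the self-reference
```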
Ishaan Jaff
a7b3c664d1
(Feat) set guardrails per team (#7993)
* _add_guardrails_from_key_or_team_metadata

* e2e test test_guardrails_with_team_controls

* add try/except on team new

* test_guardrails_with_team_controls

* test_guardrails_with_api_key_controls
2025-01-25 10:41:11 -08:00
Ishaan Jaff
669b4fc955
(Prometheus) - emit key budget metrics on startup (#8002)
* add UI_SESSION_TOKEN_TEAM_ID

* add type KeyListResponseObject

* add _list_key_helper

* _initialize_api_key_budget_metrics

* key / budget metrics

* init key budget metrics on startup

* test_initialize_api_key_budget_metrics

* fix linting

* test_list_key_helper

* test_initialize_remaining_budget_metrics_exception_handling
2025-01-25 10:37:52 -08:00
Ishaan Jaff
d9dcfccdf6
(QA / testing) - Add unit testing for key model access checks (#7999)
* fix _model_matches_any_wildcard_pattern_in_list

* fix docstring
2025-01-25 10:01:35 -08:00
Ishaan Jaff
f77882948d test_init_custom_logger_compatible_class_as_callback 2025-01-24 21:27:22 -08:00
Krish Dholakia
8ca3229b26
Ensure base_model cost tracking works across all endpoints (#7989)
* test(test_completion_cost.py): add sdk test to ensure base model is used for cost tracking

* test(test_completion_cost.py): add sdk test to ensure custom pricing works

* fix(main.py): add base model cost tracking support for embedding calls

Enables base model cost tracking for embedding calls when base model set as a litellm_param

* fix(litellm_logging.py): update logging object with litellm params - including base model, if given

ensures base model param is always tracked

* fix(main.py): fix linting errors
2025-01-24 21:05:26 -08:00
Krish Dholakia
9df6bd90ba
fix(spend_tracking_utils.py): revert api key pass through fix (#7977)
* fix(spend_tracking_utils.py): revert api key pass through fix

* fix: fix linting error

* fix(spend_tracking_utils.py): add noqa - refactor post fixing standard logging payload on pass-through endpoints

* test(test_groq.py): bump groq model

* fix: fix positioning of noqa
2025-01-24 21:04:36 -08:00
Ishaan Jaff
74caef0843
(Feat) - Add GCS Pub/Sub Logging integration for sending DB SpendLogs to BigQuery (#7976)
* add pub_sub

* fix custom batch logger for GCS PUB/SUB

* GCS_PUBSUB_PROJECT_ID

* e2e gcs pub sub

* add gcs pub sub

* fix logging

* add GcsPubSubLogger

* fix pub sub

* add pub sub

* docs gcs pub / sub

* docs on pub sub controls

* test_gcs_pub_sub

* fix publish_message

* test_async_gcs_pub_sub

* test_async_gcs_pub_sub
2025-01-24 20:57:20 -08:00
Ishaan Jaff
bf46ae7346
(Testing) e2e testing for team budget enforcement checks (#7988)
* test_team_and_key_budget_enforcement

* test_team_budget_update

* test_gemini_pro_json_schema_httpx_content_policy_error
2025-01-24 18:18:12 -08:00
Ishaan Jaff
2017596913 Revert "test_team_and_key_budget_enforcement"
This reverts commit 9d44f51847.
2025-01-24 15:32:41 -08:00
Ishaan Jaff
9d44f51847 test_team_and_key_budget_enforcement 2025-01-24 15:31:48 -08:00
Ishaan Jaff
ed283bc5b4
(Feat) - allow setting default_on guardrails (#7973)
* test_default_on_guardrail

* update debug on custom guardrail

* refactor guardrails init

* guardrail registry

* allow switching guardrails default_on

* fix circle import issue

* fix bedrock applying guardrails where content is a list

* fix unused import

* docs default on guardrail

* docs fix per api key
2025-01-24 10:14:05 -08:00
Krish Dholakia
1e011b66d3
Ollama ssl verify = False + Spend Logs reliability fixes (#7931)
* fix(http_handler.py): support passing ssl verify dynamically and using the correct httpx client based on passed ssl verify param

Fixes https://github.com/BerriAI/litellm/issues/6499

* feat(llm_http_handler.py): support passing `ssl_verify=False` dynamically in call args

Closes https://github.com/BerriAI/litellm/issues/6499

* fix(proxy/utils.py): prevent bad logs from breaking all cost tracking + reset list regardless of success/failure

prevents malformed logs from causing all spend tracking to break since they're constantly retried

* test(test_proxy_utils.py): add test to ensure bad log is dropped

* test(test_proxy_utils.py): ensure in-memory spend logs reset after bad log error

* test(test_user_api_key_auth.py): add unit test to ensure end user id as str works

* fix(auth_utils.py): ensure extracted end user id is always a str

prevents db cost tracking errors

* test(test_auth_utils.py): ensure get end user id from request body always returns a string

* test: update tests

* test: skip bedrock test- behaviour now supported

* test: fix testing

* refactor(spend_tracking_utils.py): reduce size of get_logging_payload

* test: fix test

* bump: version 1.59.4 → 1.59.5

* Revert "bump: version 1.59.4 → 1.59.5"

This reverts commit 1182b46b2e.

* fix(utils.py): fix spend logs retry logic

* fix(spend_tracking_utils.py): fix get tags

* fix(spend_tracking_utils.py): fix end user id spend tracking on pass-through endpoints
2025-01-23 23:05:41 -08:00
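
The commit above adds support for passing `ssl_verify=False` dynamically in call args, e.g. for a self-hosted Ollama server behind a self-signed certificate. A hedged sketch; the host is hypothetical, and disabling verification is unsafe outside trusted networks:

```python
# Hedged sketch of per-call `ssl_verify=False`, per the commit above.
# The api_base is a hypothetical self-signed internal host.
import litellm

response = litellm.completion(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://ollama.internal:11434",  # hypothetical host
    ssl_verify=False,  # skip TLS verification for this call only
)
print(response.choices[0].message.content)
```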