Ishaan Jaff
e0dbd328be
test_bedrock_nova_json.py
Read Version from pyproject.toml / read-version (push) Successful in 17s
Helm unit test / unit-test (push) Successful in 23s
2025-04-03 08:37:59 -07:00
Ishaan Jaff
afcd00bdc0
test_redis_caching_llm_caching_ttl
2025-04-02 21:54:35 -07:00
Ishaan Jaff
dd2d1dc2f4
Merge branch 'main' into litellm_metrics_pod_lock_manager
2025-04-02 21:35:55 -07:00
Ishaan Jaff
e68603e176
test create and update gauge
2025-04-02 21:31:19 -07:00
Krish Dholakia
8ee32291e0
Squashed commit of the following: (#9709)
commit b12a9892b7
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Wed Apr 2 08:09:56 2025 -0700
fix(utils.py): don't modify openai_token_counter
commit 294de31803
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 21:22:40 2025 -0700
fix: fix linting error
commit cb6e9fbe40
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 19:52:45 2025 -0700
refactor: complete migration
commit bfc159172d
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 19:09:59 2025 -0700
refactor: refactor more constants
commit 43ffb6a558
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 18:45:24 2025 -0700
fix: test
commit 04dbe4310c
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 18:28:58 2025 -0700
refactor: move more constants into constants.py
commit 3c26284aff
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 18:14:46 2025 -0700
refactor: migrate hardcoded constants out of __init__.py
commit c11e0de69d
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 18:11:21 2025 -0700
build: migrate all constants into constants.py
commit 7882bdc787
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date: Mon Mar 24 18:07:37 2025 -0700
build: initial test banning hardcoded numbers in repo
2025-04-02 21:24:54 -07:00
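The squashed commits above end with an initial test banning hardcoded numbers in the repo. A minimal sketch of such a check, assuming an AST-based scan for numeric literals outside an allowlist (hypothetical approach, not the actual test from commit 7882bdc787):

```python
import ast


def find_magic_numbers(source: str, allowed=(0, 1, -1)) -> list[int]:
    """Return line numbers containing numeric literals not in the allowlist."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # bool is a subclass of int, so exclude True/False explicitly
        if (
            isinstance(node, ast.Constant)
            and isinstance(node.value, (int, float))
            and not isinstance(node.value, bool)
            and node.value not in allowed
        ):
            hits.append(node.lineno)
    return hits
```

A repo-wide version of this would walk the source tree and fail CI when any file reports hits, forcing new numbers into constants.py.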
Ishaan Jaff
0155b9f212
Merge branch 'main' into litellm_refactor_pod_lock_manager
2025-04-02 21:05:18 -07:00
Ishaan Jaff
5222cce510
Merge branch 'main' into litellm_metrics_pod_lock_manager
2025-04-02 21:04:44 -07:00
Ishaan Jaff
20d84ddef1
get_base_image_generation_call_args
2025-04-02 21:04:06 -07:00
Ishaan Jaff
acf920a41a
Merge branch 'main' into litellm_fix_azure_o_series
2025-04-02 20:58:52 -07:00
Ishaan Jaff
c3341a1e18
test fixes - azure deprecated dall-e-2
2025-04-02 20:56:20 -07:00
Ishaan Jaff
74550df197
get_base_image_generation_call_args
2025-04-02 20:52:16 -07:00
Ishaan Jaff
4ed0ab5b1c
Revert "remove google dns for img tests"
This reverts commit d3fc8b563c.
2025-04-02 20:42:29 -07:00
Ishaan Jaff
d3fc8b563c
remove google dns for img tests
2025-04-02 20:34:47 -07:00
Ishaan Jaff
8405fcb748
test pod lock manager
2025-04-02 15:06:31 -07:00
Ishaan Jaff
a64631edfb
test pod lock manager
2025-04-02 14:39:40 -07:00
Ishaan Jaff
d4a20d4fb8
test azure o series
2025-04-02 09:46:45 -07:00
Ishaan Jaff
83e4c34e0a
test fix get_base_completion_call_args
2025-04-02 09:18:56 -07:00
Krish Dholakia
053b0e741f
Add Google AI Studio /v1/files upload API support (#9645)
* test: fix import for test
* fix: fix bad error string
* docs: cleanup files docs
* fix(files/main.py): cleanup error string
* style: initial commit with a provider/config pattern for files api
google ai studio files api onboarding
* fix: test
* feat(gemini/files/transformation.py): support gemini files api response transformation
* fix(gemini/files/transformation.py): return file id as gemini uri
allows id to be passed in to chat completion request, just like openai
* feat(llm_http_handler.py): support async route for files api on llm_http_handler
* fix: fix linting errors
* fix: fix model info check
* fix: fix ruff errors
* fix: fix linting errors
* Revert "fix: fix linting errors"
This reverts commit 926a5a527f.
* fix: fix linting errors
* test: fix test
* test: fix tests
2025-04-02 08:56:58 -07:00
Pranav Simha
2e35f07e94
Add support for max_completion_tokens to the Cohere chat transformation config (#9701)
2025-04-02 07:50:44 -07:00
Ishaan Jaff
443b8ab93a
test_azure_o1_series_response_format_extra_params
2025-04-02 07:01:08 -07:00
Ishaan Jaff
8f372ea243
test_completion_invalid_param_cohere
2025-04-02 06:49:11 -07:00
Krish Dholakia
6c69ad4c89
fix(model_management_endpoints.py): fix allowing team admins to update team models (#9697)
* fix(model_management_endpoints.py): fix allowing team admins to update their models
* test(test_models.py): add e2e test for team model flow
ensure team admin can always add / edit / delete team models
2025-04-01 22:28:15 -07:00
Krish Dholakia
23051d89dd
fix(streaming_handler.py): fix completion start time tracking (#9688)
* fix(streaming_handler.py): fix completion start time tracking
Fixes https://github.com/BerriAI/litellm/issues/9210
* feat(anthropic/chat/transformation.py): map openai 'reasoning_effort' to anthropic 'thinking' param
Fixes https://github.com/BerriAI/litellm/issues/9022
* feat: map 'reasoning_effort' to 'thinking' param across bedrock + vertex
Closes https://github.com/BerriAI/litellm/issues/9022#issuecomment-2705260808
2025-04-01 22:00:56 -07:00
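The commit above maps OpenAI's 'reasoning_effort' to Anthropic's 'thinking' param (and the same across Bedrock and Vertex). A minimal sketch of such a mapping; the budget values are illustrative assumptions, not litellm's actual constants:

```python
def map_reasoning_effort_to_thinking(reasoning_effort: str) -> dict:
    """Translate an OpenAI-style reasoning_effort level into an
    Anthropic-style 'thinking' block (hypothetical budget values)."""
    budget_tokens = {"low": 1024, "medium": 2048, "high": 4096}
    if reasoning_effort not in budget_tokens:
        raise ValueError(f"unsupported reasoning_effort: {reasoning_effort}")
    return {"type": "enabled", "budget_tokens": budget_tokens[reasoning_effort]}
```

The appeal of this design is that callers keep writing OpenAI-shaped requests while each provider config translates the param into its native equivalent.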
Ishaan Jaff
63dd2934b7
test_supports_tool_choice
2025-04-01 21:43:46 -07:00
Ishaan Jaff
4b99f833bb
test_cohere_request_body_with_allowed_params
2025-04-01 21:30:24 -07:00
Ishaan Jaff
f7129e5e59
fix _apply_openai_param_overrides
2025-04-01 21:17:59 -07:00
Ishaan Jaff
c454dbec30
get_supported_openai_params for o-1 series models
2025-04-01 19:03:50 -07:00
Ishaan Jaff
feba274a89
test DailySpendUpdateQueue
2025-04-01 18:39:23 -07:00
Ishaan Jaff
4a091a34b0
move test loc
2025-04-01 18:33:33 -07:00
Ishaan Jaff
8dc792139e
refactor file structure
2025-04-01 18:30:48 -07:00
Ishaan Jaff
290e837515
test_update_logs_with_spend_logs_url
2025-04-01 18:15:01 -07:00
Ishaan Jaff
4ddca7a79c
Merge branch 'main' into litellm_fix_service_account_behavior
2025-04-01 12:04:28 -07:00
Ishaan Jaff
61b609f320
Merge pull request #9673 from BerriAI/litellm_qa_deadlock_fixes
[Reliability] - Ensure new Redis + DB architecture tracks spend accurately
2025-04-01 12:04:03 -07:00
Ishaan Jaff
c2c5dbf24f
test_get_enforced_params
2025-04-01 08:41:53 -07:00
Ishaan Jaff
f805e15f7b
test_get_enforced_params_for_service_account_settings
2025-04-01 08:39:41 -07:00
Ishaan Jaff
e5f6529c42
test_get_enforced_params_for_service_account_settings
2025-04-01 07:46:38 -07:00
Ishaan Jaff
13aa7f75f6
test_enforced_params_check
2025-04-01 07:40:31 -07:00
Ishaan Jaff
55763ae276
test_end_user_transactions_reset
2025-04-01 07:13:25 -07:00
Ishaan Jaff
7a2442d6c0
test_batch_update_spend
2025-04-01 07:12:29 -07:00
Krish Dholakia
62ad84fb64
UI (new_usage.tsx): Report 'total_tokens' + report success/failure calls (#9675)
* feat(internal_user_endpoints.py): return 'total_tokens' in `/user/daily/analytics`
* test(test_internal_user_endpoints.py): add unit test to assert spend metrics and dailyspend metadata always report the same fields
* build(schema.prisma): record success + failure calls to daily user table
allows understanding why model requests might exceed provider requests (e.g. user hit rate limit error)
* fix(internal_user_endpoints.py): report success / failure requests in API
* fix(proxy/utils.py): default to success
status can be missing or none at times for successful requests
* feat(new_usage.tsx): show success/failure calls on UI
* style(new_usage.tsx): ui cleanup
* fix: fix linting error
* fix: fix linting error
* feat(litellm-proxy-extras/): add new migration files
2025-03-31 22:48:43 -07:00
Krish Dholakia
f2a7edaddc
fix(proxy_server.py): Fix "Circular reference detected" error when max_parallel_requests = 0 (#9671)
* fix(proxy_server.py): remove non-functional parent backoff/retry on /chat/completion
Causes circular reference error
* fix(http_parsing_utils.py): safely return parsed body - don't allow mutation of cached request body by client functions
Root cause fix for circular reference error
* Revert "fix: Anthropic prompt caching on GCP Vertex AI (#9605)" (#9670)
This reverts commit a8673246dc.
* add type hints for AnthropicMessagesResponse
* define types for response from AnthropicMessagesResponse
* fix response typing
* allow using litellm.messages.acreate and litellm.messages.create
* fix anthropic_messages implementation
* add clear type hints to litellm.messages.create functions
* fix anthropic_messages
* working anthropic API tests
* fixes - anthropic messages interface
* use new anthropic interface
* fix code quality check
* docs anthropic messages endpoint
* add namespace_packages = True to mypy
* fix mypy lint errors
* docs anthropic messages interface
* test: fix unit test
* test(test_http_parsing_utils.py): update tests
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2025-03-31 22:06:02 -07:00
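The http_parsing_utils fix above (safely return the parsed body, don't allow mutation of the cached request body) comes down to handing callers a copy instead of the cached object. A minimal sketch under that assumption (hypothetical helper, not the actual litellm implementation):

```python
import copy

_parsed_body_cache: dict = {}


def get_parsed_body(request_id: str, raw_body: dict) -> dict:
    """Cache the parsed body once per request, then always return a deep
    copy so a caller mutating the result cannot corrupt the cached original."""
    if request_id not in _parsed_body_cache:
        _parsed_body_cache[request_id] = raw_body
    return copy.deepcopy(_parsed_body_cache[request_id])
```

Returning the cached dict directly is what allows a downstream function's mutation to feed back into later reads of the "same" request body, which is the kind of shared-state bug the root-cause fix targets.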
Krish Dholakia
722f3ff0e6
fix(cost_calculator.py): allows checking received + sent model name when checking for cost calculation (#9669)
Fixes issue introduced by dfb838eaff (r154667517)
2025-03-31 21:29:48 -07:00
Ishaan Jaff
115946d402
unit testing for SpendUpdateQueue
2025-03-31 21:25:24 -07:00
Krish Dholakia
5ad2fbcba6
Openrouter streaming fixes + Anthropic 'file' message support (#9667)
* fix(openrouter/transformation.py): Handle error in openrouter stream
Fixes https://github.com/Aider-AI/aider/issues/3550
* test(test_openrouter_chat_transformation.py): add unit tests
* feat(anthropic/chat/transformation.py): add openai 'file' message content type support
Closes https://github.com/BerriAI/litellm/issues/9463
* fix(factory.py): add bedrock converse support for openai 'file' message content type
Closes https://github.com/BerriAI/litellm/issues/9463
2025-03-31 21:22:59 -07:00
Ishaan Jaff
9951b356da
test_long_term_spend_accuracy_with_bursts
2025-03-31 21:09:29 -07:00
Ishaan Jaff
923ac2303b
test_end_user_transactions_reset
2025-03-31 20:55:13 -07:00
Ishaan Jaff
bc5cc51b9d
Merge pull request #9567 from BerriAI/litellm_anthropic_messages_improvements
[Refactor] - Expose litellm.messages.acreate() and litellm.messages.create() to make LLM API calls in Anthropic API spec
2025-03-31 20:50:30 -07:00
Ishaan Jaff
271b8b95bc
test spend accuracy
2025-03-31 19:35:07 -07:00
Ishaan Jaff
aa8261af89
test fixes
2025-03-31 19:33:10 -07:00
Ishaan Jaff
a753fc9d9f
test_long_term_spend_accuracy_with_bursts
2025-03-31 19:17:13 -07:00