Commit graph

14814 commits

Author · SHA1 · Message · Date
Daniel Liden
b789440854
Update streaming_logging.md
updates cost tracking code

- replace `completion_response` with `response_obj`
- add `import logging`
2024-07-15 10:21:55 -05:00
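
A minimal sketch of the corrected cost-tracking callback described in the commit above, assuming litellm's `CustomLogger` hook and `completion_cost` helper (the `CostTracker` class name here is hypothetical, not the verbatim streaming_logging.md example):

```python
# Minimal sketch of the corrected doc example, not the verbatim streaming_logging.md code.
import logging  # the commit adds this import

import litellm
from litellm.integrations.custom_logger import CustomLogger


class CostTracker(CustomLogger):  # hypothetical class name
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # The hook's parameter is named `response_obj`, not `completion_response`.
        cost = litellm.completion_cost(completion_response=response_obj)
        logging.info(f"streaming response cost: {cost}")
```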
Krrish Dholakia
2bf1f06a0e bump: version 1.41.21 → 1.41.22 2024-07-14 08:06:53 -07:00
Krrish Dholakia
82ca7af6df fix(vertex_httpx.py): google search grounding fix 2024-07-14 08:06:17 -07:00
Krrish Dholakia
385da04d72 docs(vertex.md): add reference vertex ai grounding doc 2024-07-13 22:04:38 -07:00
Krrish Dholakia
fe6ea9f892 docs(user_keys.md): add openai js example to docs 2024-07-13 22:00:53 -07:00
Krish Dholakia
6bf60d773e
Merge pull request #4696 from BerriAI/litellm_guardrail_logging_only
Allow setting `logging_only` in guardrails config
2024-07-13 21:50:43 -07:00
Krish Dholakia
1e2e67c3fe
Merge pull request #4702 from BerriAI/litellm_add_azure_ai_pricing
add azure ai pricing + token info (mistral/jamba instruct/llama3)
2024-07-13 21:50:31 -07:00
Krish Dholakia
7bc9a189e7
Merge branch 'main' into litellm_add_azure_ai_pricing 2024-07-13 21:50:26 -07:00
Krrish Dholakia
9cca25f874 test(test_end_users.py): fix test 2024-07-13 21:46:19 -07:00
Krrish Dholakia
d475311eb3 test(test_presidio_pii_masking.py): fix presidio test 2024-07-13 21:44:22 -07:00
Krish Dholakia
d0fb685c56
Merge pull request #4706 from BerriAI/litellm_retry_after
Return `retry-after` header for rate limited requests
2024-07-13 21:37:41 -07:00
Krrish Dholakia
de8230ed41 fix(proxy_server.py): fix returning response headers on exception 2024-07-13 19:11:30 -07:00
Ishaan Jaff
4d7d6504b6
Merge pull request #4704 from BerriAI/litellm_debug_mem
[Debug-Utils] Add some useful memory usage debugging utils
2024-07-13 18:44:40 -07:00
Ishaan Jaff
ed5114c680
Merge pull request #4703 from BerriAI/litellm_only_use_internal_use_cache
[Fix Memory Usage] - only use per request tracking if slack alerting is being used
2024-07-13 18:40:22 -07:00
Ishaan Jaff
785081422c
Merge pull request #4705 from BerriAI/litellm_return_internal_cache_usage
feat mem utils debugging return size of in memory cache
2024-07-13 18:24:53 -07:00
Ishaan Jaff
31783196c0 feat - return size of in memory cache 2024-07-13 18:22:44 -07:00
Ishaan Jaff
759e02bdaa debug mem issues show growth 2024-07-13 18:05:19 -07:00
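
The memory-debugging commits above ("show growth", "return size of in memory cache") point at the usual snapshot-and-compare pattern; a generic sketch using the third-party objgraph package (an assumption here, not necessarily what litellm's debug utils use):

```python
# Generic memory-growth debugging sketch; objgraph is an assumption,
# not necessarily the library used by litellm's debug utils.
import objgraph


def log_object_growth() -> None:
    # Prints object types whose instance counts grew since the previous call,
    # which helps spot per-request leaks.
    objgraph.show_growth(limit=10)
```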
Ishaan Jaff
69f74c1e6c fix only use per request tracking if slack alerting is being used 2024-07-13 18:01:53 -07:00
Krrish Dholakia
fde434be66 feat(proxy_server.py): return 'retry-after' param for rate limited requests
Closes https://github.com/BerriAI/litellm/issues/4695
2024-07-13 17:15:20 -07:00
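
litellm's proxy is built on FastAPI; a generic, hedged sketch of returning a `retry-after` header on a 429 response (illustrative only, not the actual proxy_server.py change):

```python
# Illustrative FastAPI pattern only; not litellm's actual proxy_server.py code.
from fastapi import FastAPI, HTTPException

app = FastAPI()


@app.post("/chat/completions")
async def chat_completions():
    rate_limited, seconds_until_reset = True, 60  # hypothetical rate-limiter result
    if rate_limited:
        # Clients can read this header to know how long to back off before retrying.
        raise HTTPException(
            status_code=429,
            detail="Rate limit exceeded",
            headers={"retry-after": str(seconds_until_reset)},
        )
    return {"status": "ok"}
```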
Krrish Dholakia
bc9fe23ebf fix: cleanup 2024-07-13 16:36:04 -07:00
Krrish Dholakia
b1be355d42 build(model_prices_and_context_window.json): add azure ai jamba instruct pricing + token details
Adds jamba instruct, mistral, llama3 pricing + token info for azure_ai
2024-07-13 16:34:31 -07:00
Ishaan Jaff
5c6e24370e bump: version 1.41.20 → 1.41.21 2024-07-13 16:05:52 -07:00
Krish Dholakia
bc58e44d8f
Merge pull request #4701 from BerriAI/litellm_rpm_support_passthrough
Support key-rpm limits on pass-through endpoints
2024-07-13 15:22:29 -07:00
Krrish Dholakia
a6deb9c350 docs(pass_through.md): update doc to specify key rpm limits will be enforced 2024-07-13 15:10:13 -07:00
Ishaan Jaff
1206b0b6a9
Merge pull request #4693 from BerriAI/litellm_bad_req_error_mapping
fix - Raise `BadRequestError` when passing the wrong role
2024-07-13 15:05:54 -07:00
Krrish Dholakia
da4bd47e3e test: test fixes 2024-07-13 15:04:13 -07:00
Krrish Dholakia
77325358b4 fix(pass_through_endpoints.py): fix client init 2024-07-13 14:46:56 -07:00
Ishaan Jaff
c1a9881d5c
Merge pull request #4697 from BerriAI/litellm_fix_sso_bug
[Fix] Bug - Clear user_id from cache when /user/update is called
2024-07-13 14:39:47 -07:00
Krrish Dholakia
7e769f3b89 fix: fix linting errors 2024-07-13 14:39:42 -07:00
Ishaan Jaff
fad37a969b ui new build 2024-07-13 14:38:13 -07:00
Ishaan Jaff
b0a1ed72b1
Merge pull request #4692 from BerriAI/ui_fix_cache_ratio_calc
[UI] Fix Cache Ratio Calc
2024-07-13 14:36:39 -07:00
Krrish Dholakia
55e153556a test(test_pass_through_endpoints.py): add test for rpm limit support 2024-07-13 13:49:20 -07:00
Krrish Dholakia
0cc273d77b feat(pass_through_endpoint.py): support enforcing key rpm limits on pass through endpoints
Closes https://github.com/BerriAI/litellm/issues/4698
2024-07-13 13:29:44 -07:00
Ishaan Jaff
bba748eaf4 fix test rules 2024-07-13 13:23:23 -07:00
Ishaan Jaff
a447e4dd1a delete updated / deleted values from cache 2024-07-13 13:16:57 -07:00
Ishaan Jaff
56b69eba18 test updating user role 2024-07-13 13:13:40 -07:00
Krrish Dholakia
f1fe229bb1 docs(guardrails.md): update guardrail api spec 2024-07-13 12:34:49 -07:00
Ishaan Jaff
893ed4e5f1 correctly clear cache when updating a user 2024-07-13 12:33:43 -07:00
Ishaan Jaff
bc91025307 use wrapper on /user endpoints 2024-07-13 12:29:15 -07:00
Ishaan Jaff
677db38f8b add doc string to explain what delete cache does 2024-07-13 12:25:31 -07:00
Krrish Dholakia
6b78e39600 feat(guardrails.py): allow setting logging_only in guardrails_config for presidio pii masking integration 2024-07-13 12:22:17 -07:00
Ishaan Jaff
670bf1b98d correctly flush cache when updating user 2024-07-13 12:05:09 -07:00
Krrish Dholakia
f2522867ed fix(types/guardrails.py): add 'logging_only' param support 2024-07-13 11:44:37 -07:00
Krrish Dholakia
caa01d20cb build: re-run ci/cd 2024-07-13 11:41:35 -07:00
Ishaan Jaff
bcc89a2c3a fix testing exception mapping 2024-07-13 11:10:13 -07:00
Krrish Dholakia
9d02d51a17 docs(pass_through.md): cleanup docs 2024-07-13 09:56:06 -07:00
Ishaan Jaff
d0dbc0742b fix exception raised in factory.py 2024-07-13 09:55:04 -07:00
Ishaan Jaff
c7f74b0297 test - test_completion_bedrock_invalid_role_exception 2024-07-13 09:54:32 -07:00
Ishaan Jaff
23cccba070 fix str from BadRequestError 2024-07-13 09:54:32 -07:00
Ishaan Jaff
03933de775 fix exception raised in factory.py 2024-07-13 09:54:32 -07:00