Commit graph

9496 commits

Author · SHA1 · Message · Date
Krrish Dholakia
9cc2daeec9 fix(utils.py): update get_model_info docstring
Fixes https://github.com/BerriAI/litellm/issues/4711
2024-07-15 18:18:50 -07:00
Ishaan Jaff
a900f352b5 fix - don't write file.filename 2024-07-15 14:56:01 -07:00
Krrish Dholakia
e8e31c4029 docs(enterprise.md): cleanup docs 2024-07-15 14:52:08 -07:00
Pamela Fox
0c98cc6a86 New line 2024-07-15 11:13:14 -07:00
Pamela Fox
d0fe1a8906 Docstring 2024-07-15 11:12:42 -07:00
Pamela Fox
8d01f91056 Less changes 2024-07-15 11:11:21 -07:00
Pamela Fox
a2188a869e Less changes 2024-07-15 11:09:45 -07:00
Pamela Fox
d43dbc756b Count tokens for tools 2024-07-15 11:07:52 -07:00
Ishaan Jaff
3dc2ec8119 fix show debugging utils on in mem usage 2024-07-15 10:05:57 -07:00
Krrish Dholakia
82ca7af6df fix(vertex_httpx.py): google search grounding fix 2024-07-14 08:06:17 -07:00
Krish Dholakia
6bf60d773e Merge pull request #4696 from BerriAI/litellm_guardrail_logging_only
Allow setting `logging_only` in guardrails config
2024-07-13 21:50:43 -07:00
Krish Dholakia
7bc9a189e7 Merge branch 'main' into litellm_add_azure_ai_pricing 2024-07-13 21:50:26 -07:00
Krrish Dholakia
d475311eb3 test(test_presidio_pii_masking.py): fix presidio test 2024-07-13 21:44:22 -07:00
Krish Dholakia
d0fb685c56 Merge pull request #4706 from BerriAI/litellm_retry_after
Return `retry-after` header for rate limited requests
2024-07-13 21:37:41 -07:00
Krrish Dholakia
de8230ed41 fix(proxy_server.py): fix returning response headers on exception 2024-07-13 19:11:30 -07:00
Ishaan Jaff
4d7d6504b6 Merge pull request #4704 from BerriAI/litellm_debug_mem
[Debug-Utils] Add some useful memory usage debugging utils
2024-07-13 18:44:40 -07:00
Ishaan Jaff
ed5114c680 Merge pull request #4703 from BerriAI/litellm_only_use_internal_use_cache
[Fix Memory Usage] - only use per request tracking if slack alerting is being used
2024-07-13 18:40:22 -07:00
Ishaan Jaff
31783196c0 feat - return size of in memory cache 2024-07-13 18:22:44 -07:00
Ishaan Jaff
759e02bdaa debug mem issues show growth 2024-07-13 18:05:19 -07:00
Ishaan Jaff
69f74c1e6c fix only use per request tracking if slack alerting is being used 2024-07-13 18:01:53 -07:00
Krrish Dholakia
fde434be66 feat(proxy_server.py): return 'retry-after' param for rate limited requests
Closes https://github.com/BerriAI/litellm/issues/4695
2024-07-13 17:15:20 -07:00
Krrish Dholakia
bc9fe23ebf fix: cleanup 2024-07-13 16:36:04 -07:00
Krrish Dholakia
b1be355d42 build(model_prices_and_context_window.json): add azure ai jamba instruct pricing + token details
Adds jamba instruct, mistral, llama3 pricing + token info for azure_ai
2024-07-13 16:34:31 -07:00
Krish Dholakia
bc58e44d8f Merge pull request #4701 from BerriAI/litellm_rpm_support_passthrough
Support key-rpm limits on pass-through endpoints
2024-07-13 15:22:29 -07:00
Ishaan Jaff
1206b0b6a9 Merge pull request #4693 from BerriAI/litellm_bad_req_error_mapping
fix - Raise `BadRequestError` when passing the wrong role
2024-07-13 15:05:54 -07:00
Krrish Dholakia
da4bd47e3e test: test fixes 2024-07-13 15:04:13 -07:00
Krrish Dholakia
77325358b4 fix(pass_through_endpoints.py): fix client init 2024-07-13 14:46:56 -07:00
Ishaan Jaff
c1a9881d5c Merge pull request #4697 from BerriAI/litellm_fix_sso_bug
[Fix] Bug - Clear user_id from cache when /user/update is called
2024-07-13 14:39:47 -07:00
Krrish Dholakia
7e769f3b89 fix: fix linting errors 2024-07-13 14:39:42 -07:00
Ishaan Jaff
fad37a969b ui new build 2024-07-13 14:38:13 -07:00
Krrish Dholakia
55e153556a test(test_pass_through_endpoints.py): add test for rpm limit support 2024-07-13 13:49:20 -07:00
Krrish Dholakia
0cc273d77b feat(pass_through_endpoint.py): support enforcing key rpm limits on pass through endpoints
Closes https://github.com/BerriAI/litellm/issues/4698
2024-07-13 13:29:44 -07:00
Ishaan Jaff
bba748eaf4 fix test rules 2024-07-13 13:23:23 -07:00
Ishaan Jaff
a447e4dd1a delete updated / deleted values from cache 2024-07-13 13:16:57 -07:00
Ishaan Jaff
56b69eba18 test updating user role 2024-07-13 13:13:40 -07:00
Ishaan Jaff
893ed4e5f1 correctly clear cache when updating a user 2024-07-13 12:33:43 -07:00
Ishaan Jaff
bc91025307 use wrapper on /user endpoints 2024-07-13 12:29:15 -07:00
Ishaan Jaff
677db38f8b add doc string to explain what delete cache does 2024-07-13 12:25:31 -07:00
Krrish Dholakia
6b78e39600 feat(guardrails.py): allow setting logging_only in guardrails_config for presidio pii masking integration 2024-07-13 12:22:17 -07:00
Ishaan Jaff
670bf1b98d correctly flush cache when updating user 2024-07-13 12:05:09 -07:00
Krrish Dholakia
f2522867ed fix(types/guardrails.py): add 'logging_only' param support 2024-07-13 11:44:37 -07:00
Krrish Dholakia
caa01d20cb build: re-run ci/cd 2024-07-13 11:41:35 -07:00
Ishaan Jaff
bcc89a2c3a fix testing exception mapping 2024-07-13 11:10:13 -07:00
Ishaan Jaff
d0dbc0742b fix exception raised in factory.py 2024-07-13 09:55:04 -07:00
Ishaan Jaff
c7f74b0297 test - test_completion_bedrock_invalid_role_exception 2024-07-13 09:54:32 -07:00
Ishaan Jaff
23cccba070 fix str from BadRequestError 2024-07-13 09:54:32 -07:00
Ishaan Jaff
03933de775 fix exception raised in factory.py 2024-07-13 09:54:32 -07:00
Krish Dholakia
66cedccd6b Merge pull request #4686 from BerriAI/litellm_custom_chat_endpoints
docs(pass_through.md): Creating custom chat endpoints on proxy
2024-07-13 09:45:17 -07:00
Ishaan Jaff
8203174faf ci/cd run again 2024-07-12 19:08:59 -07:00
Ishaan Jaff
2758a9165b test_async_response_azure 2024-07-12 19:04:05 -07:00