Commit graph

3089 commits

Author SHA1 Message Date
Krrish Dholakia
bd7af04a72 feat(proxy_server.py): support custom llm handler on proxy 2024-07-25 17:56:34 -07:00
Ishaan Jaff
2432c90515 feat - support health check audio_speech 2024-07-25 17:26:14 -07:00
Krrish Dholakia
bfdda089c8 fix(proxy_server.py): check if input list > 0 before indexing into it
resolves 'list index out of range' error
2024-07-25 14:23:07 -07:00
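The fix above is a guard before indexing into a possibly empty list. A minimal sketch of the pattern, with hypothetical names (the actual proxy_server.py code differs):

```python
from typing import Any, List, Optional

def first_input_or_none(input_list: Optional[List[Any]]) -> Optional[Any]:
    """Return the first element only when the list is non-empty.

    Checking the length before indexing avoids the 'list index out of range'
    IndexError the commit message references. Names here are illustrative,
    not the proxy's actual helpers.
    """
    if input_list and len(input_list) > 0:
        return input_list[0]
    return None

print(first_input_or_none([]))         # None instead of IndexError
print(first_input_or_none(["hello"]))  # "hello"
```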
Krrish Dholakia
4e51f712f3 fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions
Fixes https://github.com/BerriAI/litellm/issues/749
2024-07-25 09:57:19 -07:00
Krrish Dholakia
b376ee71b0 fix(internal_user_endpoints.py): support updating budgets for /user/update 2024-07-24 21:51:46 -07:00
Krrish Dholakia
6ab2527fdc feat(auth_check.py): support using redis cache for team objects
Allows team update / check logic to work across instances instantly
2024-07-24 18:14:49 -07:00
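A rough sketch of the cache-aside idea the commit describes: team objects live in a shared Redis cache so an update made on one proxy instance is visible to the others. The key format, TTL, and in-memory "database" are assumptions for illustration, not LiteLLM's actual schema.

```python
import json
from typing import Optional

import redis  # third-party client; assumed to be installed

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Stand-in for the proxy's database table of teams.
_FAKE_DB: dict = {"team-1": {"team_id": "team-1", "max_budget": 100.0}}

def get_team_object(team_id: str) -> Optional[dict]:
    """Cache-aside lookup: try Redis first, fall back to the 'database'."""
    cache_key = f"team_id:{team_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)
    team = _FAKE_DB.get(team_id)
    if team is not None:
        # Short TTL so updates made on another instance are picked up quickly.
        r.set(cache_key, json.dumps(team), ex=60)
    return team

def update_team_object(team_id: str, updates: dict) -> None:
    """Write through to the DB, then refresh the shared cache so every
    instance sees the change at once."""
    _FAKE_DB.setdefault(team_id, {"team_id": team_id}).update(updates)
    r.set(f"team_id:{team_id}", json.dumps(_FAKE_DB[team_id]), ex=60)
```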
Krrish Dholakia
b5c5ed2209 fix(key_management_endpoints.py): if budget duration set, set budget_reset_at 2024-07-24 15:02:22 -07:00
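The commit above sets a reset timestamp whenever a budget duration is provided. A hedged sketch of that calculation, assuming duration strings like "30s", "30m", "30h", "30d"; the real key-management code may accept other formats:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def budget_reset_at(budget_duration: Optional[str]) -> Optional[datetime]:
    """If a budget duration is set, compute when the budget should reset."""
    if not budget_duration:
        return None
    value, unit = int(budget_duration[:-1]), budget_duration[-1]
    seconds = {"s": 1, "m": 60, "h": 3600, "d": 86400}[unit] * value
    return datetime.now(timezone.utc) + timedelta(seconds=seconds)

print(budget_reset_at("30d"))  # roughly one month from now
print(budget_reset_at(None))   # None -> no reset scheduled
```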
Krrish Dholakia
99d8b0ad68 fix(bedrock_httpx.py): fix async client check 2024-07-24 12:43:52 -07:00
Marc Abramowitz
642f1a7bcc Check existence of multiple views in 1 query
instead of issuing one query per view. Checking all views in a single
query is more efficient than a round trip for each one.
2024-07-23 21:55:02 -07:00
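A rough illustration of the idea in the commit above: instead of one round trip per view, ask PostgreSQL for every expected view in a single query and diff the result. The view names and the psycopg2 connection are assumptions for the sketch, not the project's actual check.

```python
import psycopg2  # assumed available; any Postgres driver works the same way

EXPECTED_VIEWS = ["MonthlyGlobalSpend", "Last30dKeysBySpend"]  # illustrative names

def missing_views(dsn: str) -> list:
    """Return the expected views that do not exist, using one query."""
    query = """
        SELECT table_name
        FROM information_schema.views
        WHERE table_schema = 'public'
          AND table_name = ANY(%s)
    """
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(query, (EXPECTED_VIEWS,))
        found = {row[0] for row in cur.fetchall()}
    return [v for v in EXPECTED_VIEWS if v not in found]
```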
Krrish Dholakia
fb0a13c8bb fix(anthropic.py): support openai system message being a list 2024-07-23 21:45:56 -07:00
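The commit above handles OpenAI-style system messages whose `content` is a list of text parts rather than a plain string. A hedged sketch of the normalization an Anthropic adapter might do; this is not LiteLLM's actual transformation code:

```python
from typing import Union

def flatten_system_content(content: Union[str, list]) -> str:
    """Join list-typed content parts into the single system string
    Anthropic's API expects."""
    if isinstance(content, str):
        return content
    parts = []
    for block in content:
        if isinstance(block, dict) and block.get("type") == "text":
            parts.append(block.get("text", ""))
        elif isinstance(block, str):
            parts.append(block)
    return "\n".join(parts)

print(flatten_system_content([
    {"type": "text", "text": "You are helpful."},
    {"type": "text", "text": "Answer briefly."},
]))
```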
Ishaan Jaff
92708a2737
Merge pull request #4841 from BerriAI/litellm_anthropic_cost_tracking
[Feat] - /v1/messages support usage tracking on spendLogs
2024-07-23 16:37:42 -07:00
Ishaan Jaff
78eb5164df fix DB accept null values for api_base, user, etc 2024-07-23 16:33:04 -07:00
Ishaan Jaff
a71b60d005 Pass litellm proxy specific metadata 2024-07-23 15:31:30 -07:00
Ishaan Jaff
4c1ee1e282 fix add better debugging _PROXY_track_cost_callback 2024-07-23 15:25:46 -07:00
Krrish Dholakia
f64a3309d1 fix(utils.py): support raw response headers for streaming requests 2024-07-23 11:58:58 -07:00
Krrish Dholakia
dcb974dd1e feat(utils.py): support passing openai response headers to client, if enabled
Allows openai/openai-compatible provider response headers to be sent to client, if 'return_response_headers' is enabled
2024-07-23 11:30:52 -07:00
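A small sketch of the opt-in behavior described above: when a `return_response_headers` flag is enabled, the raw provider headers are attached to the object handed back to the client. The flag handling and attribute name here are assumptions, not LiteLLM's exact fields:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ChatResponse:
    content: str
    response_headers: Optional[Dict[str, str]] = field(default=None)

def build_response(content: str,
                   raw_headers: Dict[str, str],
                   return_response_headers: bool = False) -> ChatResponse:
    resp = ChatResponse(content=content)
    if return_response_headers:
        # e.g. rate-limit headers such as x-ratelimit-remaining-requests
        resp.response_headers = dict(raw_headers)
    return resp

r = build_response("hi", {"x-ratelimit-remaining-requests": "99"},
                   return_response_headers=True)
print(r.response_headers)
```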
Ishaan Jaff
b137207ae6 doc alert_to_webhook_url 2024-07-23 11:04:15 -07:00
Ishaan Jaff
0ec2e9aa27 feat alert_to_webhook_url 2024-07-23 10:09:24 -07:00
Ishaan Jaff
d116ff280e feat - set alert_to_webhook_url 2024-07-23 10:08:21 -07:00
Ishaan Jaff
c1593c0cd1 update alert_to_webhook_url 2024-07-23 10:07:08 -07:00
Ishaan Jaff
69091f31df feat - add success_Callback per request 2024-07-23 08:43:01 -07:00
Ishaan Jaff
24ae0119d1 add debug logging for team callback settings 2024-07-23 08:41:05 -07:00
Ishaan Jaff
1a33c40713 add endpoint to disable logging for a team 2024-07-23 08:38:23 -07:00
Krish Dholakia
e770ac48a2
Merge branch 'main' into litellm_braintrust_integration 2024-07-22 22:40:39 -07:00
Krish Dholakia
c4db6aa15e
Merge pull request #4810 from BerriAI/litellm_team_modify_guardrails
feat(auth_checks.py): Allow admin to disable team from turning on/off guardrails
2024-07-22 22:32:24 -07:00
Krrish Dholakia
1a83935aa4 fix(proxy/utils.py): add stronger typing for litellm params in failure call logging 2024-07-22 21:31:39 -07:00
Ishaan Jaff
290628d66b
Merge pull request #4831 from BerriAI/litellm_api_control_team_logging
[Feat] - API Endpoints to control logging callbacks per Team
2024-07-22 21:22:14 -07:00
Krrish Dholakia
548e4f53f8 feat(redact_messages.py): allow remove sensitive key information before passing to logging integration 2024-07-22 20:58:02 -07:00
Ishaan Jaff
2e5da5ea18 GET endpoint to get team callbacks 2024-07-22 20:46:20 -07:00
Ishaan Jaff
447bab4d46 only allow unique callbacks for team callbacks 2024-07-22 20:43:42 -07:00
Ishaan Jaff
5ed82ba5ce feat add return types on team/callback 2024-07-22 20:35:27 -07:00
Ishaan Jaff
dcd8f7ebf2 control team callbacks using API 2024-07-22 18:29:21 -07:00
Ishaan Jaff
c9e2f977dd feat - return team_metadata in user_api_key_auth 2024-07-22 18:21:50 -07:00
Ishaan Jaff
c34c123fe3 feat - add endpoint to set team callbacks 2024-07-22 18:18:09 -07:00
Ishaan Jaff
ff768e7571 types - AddTeamCallback 2024-07-22 18:11:59 -07:00
Krrish Dholakia
e4ab50e1a1 feat(braintrust_logging.py): working braintrust logging for successful calls 2024-07-22 17:04:55 -07:00
Ishaan Jaff
15c109f023 fix checking if _known_custom_logger_compatible_callbacks 2024-07-22 15:43:43 -07:00
Ishaan Jaff
673105c88f update tests 2024-07-22 14:44:47 -07:00
Ishaan Jaff
b64755d2a1 check is_llm_api_route 2024-07-22 14:43:30 -07:00
Ishaan Jaff
b54b1d958b track anthropic_routes 2024-07-22 14:41:12 -07:00
Ishaan Jaff
41e6c4a573 feat - arize ai log llm i/o 2024-07-22 13:47:58 -07:00
Krrish Dholakia
f10af7596c fix(utils.py): allow dropping extra_body in additional_drop_params
Fixes https://github.com/BerriAI/litellm/issues/4769
2024-07-20 19:12:58 -07:00
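A minimal sketch of the drop behavior referenced above: any key listed in `additional_drop_params` (including `extra_body`) is removed from the kwargs before they reach the provider. Parameter names mirror the commit message, but the function itself is illustrative:

```python
from typing import Any, Dict, List, Optional

def apply_additional_drop_params(
    kwargs: Dict[str, Any],
    additional_drop_params: Optional[List[str]],
) -> Dict[str, Any]:
    """Remove caller-specified params before sending the request."""
    if not additional_drop_params:
        return kwargs
    return {k: v for k, v in kwargs.items() if k not in additional_drop_params}

cleaned = apply_additional_drop_params(
    {"temperature": 0.2, "extra_body": {"foo": "bar"}},
    additional_drop_params=["extra_body"],
)
print(cleaned)  # {'temperature': 0.2}
```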
Krrish Dholakia
8b3c8102a7 feat(auth_checks.py): Allow admin to disable team from turning on/off guardrails. 2024-07-20 18:39:05 -07:00
Ishaan Jaff
8eb839863d bump: version 1.41.25 → 1.41.26 2024-07-20 18:20:54 -07:00
Ishaan Jaff
cf8ec4a888 ui new build 2024-07-20 18:18:54 -07:00
Krrish Dholakia
86c9e05c10 fix(openai.py): drop invalid params if drop_params: true for azure ai
Fixes https://github.com/BerriAI/litellm/issues/4800
2024-07-20 15:08:15 -07:00
Krish Dholakia
f797597202
Merge branch 'main' into litellm_proxy_team_cache_update 2024-07-19 21:07:26 -07:00
Krrish Dholakia
99aa311083 fix(user_api_key_auth.py): update team values in token cache if refreshed more recently 2024-07-19 17:35:59 -07:00
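A hedged sketch of the freshness check described above: the team values stored inside a cached token are replaced only when the separately cached team object is newer. The field names are assumptions, not the actual token schema:

```python
from typing import Optional

def maybe_refresh_team_in_token(valid_token: dict, team_obj: Optional[dict]) -> dict:
    """Copy team values into the cached token only if the team object
    was refreshed more recently than the token's copy."""
    if team_obj is None:
        return valid_token
    token_ts = valid_token.get("team_last_refreshed_at") or 0
    team_ts = team_obj.get("last_refreshed_at") or 0
    if team_ts > token_ts:
        # Team was updated (possibly on another instance) after this token
        # was cached, so pull the newer values in.
        valid_token["team_metadata"] = team_obj.get("metadata")
        valid_token["team_max_budget"] = team_obj.get("max_budget")
        valid_token["team_last_refreshed_at"] = team_ts
    return valid_token
```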
Krrish Dholakia
35e640076b fix(user_api_key_auth.py): update valid token cache with updated team object cache 2024-07-19 17:06:49 -07:00
Krish Dholakia
f973bd8109
Merge branch 'main' into litellm_ai21_jamba 2024-07-19 15:11:55 -07:00