Commit graph

483 commits

Author SHA1 Message Date
Ishaan Jaff
517f577292 fix - dont send alert on fail request 2024-04-22 16:07:58 -07:00
Ishaan Jaff
cd3b2a21c1 ui - find all teams 2024-04-22 14:15:09 -07:00
Ishaan Jaff
094583f18e feat - show langfuse trace in alerts 2024-04-22 08:51:46 -07:00
Ishaan Jaff
ddc71d766a fix - slack alerting show input in the api_base 2024-04-20 13:16:47 -07:00
Ishaan Jaff
6d92b13c22 feat - log team_alias to langfuse 2024-04-19 10:29:42 -07:00
Ishaan Jaff
6f948cd559 fix - show api_base in hanging requests 2024-04-18 21:01:26 -07:00
Ishaan Jaff
f04604910b fix - show api base on hanging requests 2024-04-18 20:57:22 -07:00
Ishaan Jaff
554c83fdaf ui - show all alert types when getting all callbacks 2024-04-18 20:08:13 -07:00
Krish Dholakia
77a353d484
Merge pull request #3144 from BerriAI/litellm_prometheus_latency_tracking
feat(prometheus_services.py): emit proxy latency for successful llm api requests
2024-04-18 19:10:58 -07:00
Ishaan Jaff
d9091dcf97 fix order by spend 2024-04-18 17:33:38 -07:00
Ishaan Jaff
b669e2987b fix return key aliases on /user/info 2024-04-18 17:16:52 -07:00
Krrish Dholakia
919a2876f1 fix(proxy/utils.py): add prometheus failed db request tracking 2024-04-18 16:30:29 -07:00
Krrish Dholakia
d61250109e fix(proxy/utils.py): add call type and duration to proxy_logging failure calls
this is for tracking failed db requests on prometheus
2024-04-18 16:24:36 -07:00
Ishaan Jaff
eb04a929e6
Merge pull request #3112 from BerriAI/litellm_add_alert_types
[Feat] Allow user to select slack alert types to Opt In to
2024-04-18 16:21:33 -07:00
Krrish Dholakia
0f95a824c4 feat(prometheus_services.py): emit proxy latency for successful llm api requests
uses prometheus histogram for this
2024-04-18 16:04:35 -07:00
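The commit body above notes that proxy latency is emitted via a Prometheus histogram. As a rough stdlib-only illustration of what a histogram metric records (a count per latency bucket plus a running sum, so Prometheus can compute rates and quantiles), here is a minimal stand-in; the bucket edges and class name are hypothetical, not litellm's actual metric:

```python
from bisect import bisect_left

# Hypothetical bucket upper edges in seconds; litellm's real buckets may differ.
BUCKETS = [0.1, 0.5, 1.0, 2.5, 5.0, 10.0]

class LatencyHistogram:
    """Minimal stand-in for a Prometheus histogram:
    per-bucket counts plus a running sum and sample count."""

    def __init__(self, buckets):
        self.buckets = buckets
        self.counts = [0] * (len(buckets) + 1)  # final slot is the +Inf bucket
        self.total = 0.0
        self.samples = 0

    def observe(self, seconds: float) -> None:
        # Prometheus "le" semantics: count in the first bucket whose
        # upper edge is >= the observation.
        self.counts[bisect_left(self.buckets, seconds)] += 1
        self.total += seconds
        self.samples += 1

hist = LatencyHistogram(BUCKETS)
for latency in (0.05, 0.7, 1.2, 12.0):  # 12.0 lands in the +Inf bucket
    hist.observe(latency)
```

In the real proxy this bookkeeping would be done by a `prometheus_client` `Histogram` object; the sketch only shows what that metric type tracks.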
Ishaan Jaff
e20b05d6dd fix trim messages to first 100 chars 2024-04-18 15:21:31 -07:00
Ishaan Jaff
1cda0db2ca fix - test alerting 2024-04-18 11:40:40 -07:00
Ishaan Jaff
beeee01199 feat return alert types on /config/get/callback 2024-04-17 21:02:10 -07:00
Ishaan Jaff
9a5fd07f16 fix - user based alerting 2024-04-17 20:35:29 -07:00
Ishaan Jaff
52d7fc22bb v0 add types of alerts to slack alerting 2024-04-17 18:16:19 -07:00
Ishaan Jaff
12a01ba096 litellm_add_proxy_base_url in slack alerts 2024-04-17 17:42:28 -07:00
Krrish Dholakia
f4b595ce71 fix(utils.py): return vertex api base for request hanging alerts 2024-04-16 17:53:28 -07:00
Krrish Dholakia
f4c7f4f901 fix(proxy_server.py): support tracking org spend
currently works when org set for jwt auth
2024-04-11 23:01:21 -07:00
Krrish Dholakia
470b7b64c9 fix(proxy/utils.py): fix error message 2024-04-08 20:47:13 -07:00
Krrish Dholakia
6c1444bfaa fix(proxy_server.py): allow mapping a user to an org 2024-04-08 20:45:11 -07:00
Krrish Dholakia
6110d32b1c feat(proxy/utils.py): return api base for request hanging alerts 2024-04-06 15:58:53 -07:00
Krrish Dholakia
e3c2bdef4d feat(ui): add models via ui
adds ability to add models via ui to the proxy. also fixes additional bugs around new /model/new endpoint
2024-04-04 18:56:20 -07:00
Krrish Dholakia
f536fb13e6 fix(proxy_server.py): persist models added via /model/new to db
allows models to be used across instances

https://github.com/BerriAI/litellm/issues/2319 , https://github.com/BerriAI/litellm/issues/2329
2024-04-03 20:16:41 -07:00
Krrish Dholakia
d7601a4844 perf(proxy_server.py): batch write spend logs
reduces prisma client errors, by batch writing spend logs - max 1k logs at a time
2024-04-02 18:46:55 -07:00
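The commit above describes batching spend-log writes, flushing at most 1k logs per database write to cut down on per-request Prisma round-trips. A minimal sketch of that pattern, where `db_write_batch` is a hypothetical stand-in for the proxy's actual bulk-insert call:

```python
MAX_BATCH_SIZE = 1000  # flush at most 1k logs per write, per the commit message

class SpendLogBatcher:
    """Accumulates spend logs and writes them in bulk."""

    def __init__(self, db_write_batch):
        self._write = db_write_batch  # callable taking a list of log dicts
        self._pending = []

    def add(self, log: dict) -> None:
        self._pending.append(log)
        if len(self._pending) >= MAX_BATCH_SIZE:
            self.flush()

    def flush(self) -> None:
        # One bulk write instead of a DB round-trip per request.
        if self._pending:
            self._write(self._pending)
            self._pending = []

# 2500 logs produce three writes: 1000 + 1000 + 500.
written = []
batcher = SpendLogBatcher(written.append)
for i in range(2500):
    batcher.add({"request_id": i, "spend": 0.001})
batcher.flush()  # flush the final partial batch
```

The trailing `flush()` matters: without it, up to 999 logs would sit unwritten until the next batch fills.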
Krrish Dholakia
6467dd4e11 fix(tpm_rpm_limiter.py): fix cache init logic 2024-04-01 18:01:38 -07:00
Krrish Dholakia
9c0aecf9b8 fix(proxy/utils.py): support redis caching for alerting 2024-04-01 16:13:59 -07:00
Krrish Dholakia
3b8e7241b4 fix(proxy/utils.py): uncomment max parallel request limit check 2024-03-30 20:51:59 -07:00
Krrish Dholakia
d9ff13b624 fix(utils.py): set redis_usage_cache to none by default 2024-03-30 20:10:56 -07:00
Krrish Dholakia
f58fefd589 fix(tpm_rpm_limiter.py): enable redis caching for tpm/rpm checks on keys/user/teams
allows tpm/rpm checks to work across instances

https://github.com/BerriAI/litellm/issues/2730
2024-03-30 20:01:36 -07:00
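The commit above moves tpm/rpm counters into Redis so the limits hold across proxy instances. A sketch of the windowed-counter idea, with a plain dict standing in for the shared Redis store (in Redis, each instance would atomically `INCR` the same key, which is what makes the limit global); the function and key format are illustrative:

```python
import time
from typing import Optional

shared_cache = {}  # a plain dict stands in for the shared Redis store here

def check_rpm_limit(key: str, rpm_limit: int, now: Optional[float] = None) -> bool:
    """Return True if one more request is allowed in the current minute."""
    if now is None:
        now = time.time()
    window = int(now // 60)  # one-minute window, same for every instance
    cache_key = f"rpm:{key}:{window}"  # in Redis this key would be INCR'd atomically
    shared_cache[cache_key] = shared_cache.get(cache_key, 0) + 1
    return shared_cache[cache_key] <= rpm_limit
```

Because the counter lives in shared storage keyed by minute, two proxy replicas incrementing the same key cannot jointly exceed the limit, which an in-process counter would allow.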
Krrish Dholakia
5280fc809f fix(proxy_server.py): enforce end user budgets with 'litellm.max_end_user_budget' param 2024-03-29 17:14:40 -07:00
Krrish Dholakia
c15ba368e7 fix(proxy_server.py): enable spend tracking for team-based jwt auth 2024-03-28 20:16:22 -07:00
Krrish Dholakia
7c44b32cc2 refactor(proxy/utils.py): add more debug logs 2024-03-28 18:44:35 -07:00
Krish Dholakia
934a9ac2b4
Merge pull request #2722 from BerriAI/litellm_db_perf_improvement
feat(proxy/utils.py): enable updating db in a separate server
2024-03-28 14:56:14 -07:00
Krrish Dholakia
e8d80509b1 test(test_update_spend.py): allow db_client to be none 2024-03-28 13:44:40 -07:00
Krrish Dholakia
082f1e4085 fix(proxy_server.py): allow user to pass in spend logs collector url 2024-03-28 09:14:30 -07:00
Ishaan Jaff
c3f78af2c6
Merge pull request #2728 from BerriAI/litellm_reduce_deep_copies
[FEAT] Proxy - reduce deep copies
2024-03-27 21:26:09 -07:00
Ishaan Jaff
f2e1d938f3 (fix) remove deep copy from all responses 2024-03-27 20:36:53 -07:00
Krrish Dholakia
2926d5a8eb fix(proxy/utils.py): check cache before alerting user 2024-03-27 20:09:15 -07:00
Krrish Dholakia
4eb93832e4 feat(auth_checks.py): enable admin to enforce 'user' param for all openai endpoints 2024-03-27 17:36:27 -07:00
Krrish Dholakia
1e856443e1 feat(proxy/utils.py): enable updating db in a separate server 2024-03-27 16:02:36 -07:00
Krrish Dholakia
e10eb8f6fe feat(llm_guard.py): enable key-specific llm guard check 2024-03-26 17:21:51 -07:00
Ishaan Jaff
5d121a9f3c (fix) stop using f strings with logger 2024-03-25 10:47:18 -07:00
Ishaan Jaff
dad4bd58bc (feat) stop eagerly evaluating fstring 2024-03-25 09:01:42 -07:00
Krrish Dholakia
d91f9a9f50 feat(proxy_server.py): enable llm api based prompt injection checks
run user calls through an llm api to check for prompt injection attacks. This happens in parallel to the actual llm call using `async_moderation_hook`
2024-03-20 22:43:42 -07:00
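The commit above runs the prompt-injection check in parallel with the real LLM call via `async_moderation_hook`, so moderation adds no latency unless it flags the prompt. A minimal sketch of that concurrency pattern; both coroutine bodies here are hypothetical stand-ins for the real calls (only the hook name comes from the commit message):

```python
import asyncio

async def llm_call(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate the provider round-trip
    return f"response to: {prompt}"

async def async_moderation_hook(prompt: str) -> None:
    await asyncio.sleep(0.01)  # simulate the moderation LLM call
    if "ignore previous instructions" in prompt.lower():
        raise ValueError("prompt injection detected")

async def handle_request(prompt: str) -> str:
    # Both coroutines run concurrently; if the hook raises,
    # the exception propagates and the response is discarded.
    response, _ = await asyncio.gather(
        llm_call(prompt), async_moderation_hook(prompt)
    )
    return response

print(asyncio.run(handle_request("hello")))
```

`asyncio.gather` is what makes the check "free" in the happy path: total latency is the max of the two calls rather than their sum.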
Krrish Dholakia
2dfdc8dd69 Revert "Merge pull request #2593 from BerriAI/litellm_reset_budget_fix"
This reverts commit afd363129f, reversing
changes made to c94bc94ad5.
2024-03-19 20:25:41 -07:00