Commit graph

1757 commits

Author SHA1 Message Date
Ishaan Jaff
b7dabe9bed fix literal usage 2024-05-30 14:28:53 -07:00
Ishaan Jaff
89bb364775 fix user_role test 2024-05-30 14:10:08 -07:00
Ishaan Jaff
ff33241ce2 feat - move to using enums for roles 2024-05-30 13:55:45 -07:00
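The enum refactor noted above might look roughly like the following minimal sketch. `UserRole` and its member names are assumptions for illustration, not litellm's actual identifiers:

```python
from enum import Enum

class UserRole(str, Enum):
    # Subclassing str makes each member compare equal to the raw string
    # it replaces, so existing string-based role checks keep working.
    PROXY_ADMIN = "proxy_admin"
    INTERNAL_USER = "internal_user"
    TEAM = "team"
    CUSTOMER = "customer"

def is_admin(role: str) -> bool:
    # Accepts either the enum member or its raw string value.
    return role == UserRole.PROXY_ADMIN
```

Centralizing role strings in one enum avoids the scattered-literal bugs the follow-up commits ("fix literal usage", "fix user_role test") address.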
Ishaan Jaff
0dd84b66c2 fix - verbose_proxy_logger debug 2024-05-30 08:09:31 -07:00
Krish Dholakia
7ef23e6fe9 Merge pull request #3910 from BerriAI/litellm_fix_end_user_max_budget
fix(proxy_server.py): fix end user object check when master key used
2024-05-29 18:04:56 -07:00
Ishaan Jaff
60d62c06a5 fix batch tags 2024-05-29 17:51:39 -07:00
Ishaan Jaff
2c35d2ee95 Merge pull request #3897 from BerriAI/litellm_fix_latency_cal
ui  - fix latency analytics on `completion_tokens`
2024-05-29 17:32:17 -07:00
Ishaan Jaff
88e13b8f8b Merge pull request #3894 from BerriAI/litellm_update_analytics_tab
[Feat] UI update analytics tab to show human friendly usage vals
2024-05-29 17:31:21 -07:00
Krrish Dholakia
bb3c786194 fix(proxy_server.py): fix end user object check when master key used
check if end user max budget exceeded for master key
2024-05-29 17:20:59 -07:00
Krrish Dholakia
df0365175a feat(proxy_server.py): emit webhook event whenever customer spend is tracked
Closes https://github.com/BerriAI/litellm/issues/3903
2024-05-29 15:59:32 -07:00
Krrish Dholakia
a4dae8e9f1 docs(customers.md): add customer cost tracking to docs 2024-05-29 14:55:33 -07:00
Ishaan Jaff
0521fec230 fix(proxy_server.py): fix latency calc for avg output token 2024-05-29 09:49:45 -07:00
Ishaan Jaff
467e6dd7dd ui - fix latency analytics completion_tokens 2024-05-29 09:39:53 -07:00
Ishaan Jaff
41604b53a8 fix /global/activity/model 2024-05-29 08:58:25 -07:00
Krish Dholakia
4fd3994b4e Merge branch 'main' into litellm_batch_completions 2024-05-28 22:38:05 -07:00
Ishaan Jaff
d8245cbccb Merge pull request #3888 from BerriAI/litellm_add_files_proxy
[Feat] LiteLLM Proxy Add `POST /v1/files` and `GET /v1/files`
2024-05-28 22:36:33 -07:00
Krrish Dholakia
012bde0b07 fix(router.py): support comma-separated model list for batch completion fastest response 2024-05-28 21:34:37 -07:00
Krrish Dholakia
792b25c772 feat(proxy_server.py): enable batch completion fastest response calls on proxy
introduces new `fastest_response` flag for enabling the call
2024-05-28 20:09:31 -07:00
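The "fastest response" batch-completion idea above (a comma-separated model list, return whichever model answers first) can be sketched with `asyncio`. This is an illustrative approximation, not litellm's router implementation; `call_model` and its delay table are stand-ins for real completion calls:

```python
import asyncio

async def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real completion call; latency varies per model.
    delays = {"gpt-3.5-turbo": 0.02, "claude-3-haiku": 0.01}
    await asyncio.sleep(delays.get(model, 0.05))
    return f"{model}: response"

async def fastest_response(model_list: str, prompt: str) -> str:
    # "model-a,model-b" -> fan out to all models, keep the first
    # completion that arrives, cancel the rest.
    models = [m.strip() for m in model_list.split(",")]
    tasks = [asyncio.create_task(call_model(m, prompt)) for m in models]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

result = asyncio.run(fastest_response("gpt-3.5-turbo,claude-3-haiku", "hi"))
```

Cancelling the losing tasks matters in practice: without it, every model in the list is billed for a full completion even though only one answer is used.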
Ishaan Jaff
18830e58e9 feat - add file endpoints proxy 2024-05-28 20:03:13 -07:00
Krrish Dholakia
4dd7b05406 feat(proxy_server.py): give request-level breakdown if ttft metric is selected for just that day
2024-05-28 18:09:22 -07:00
Ishaan Jaff
ea609a0b14 feat support GET /v1/batches/{batch_id} 2024-05-28 17:58:34 -07:00
Ishaan Jaff
e14e5e8f8e feat add v1/batches 2024-05-28 17:49:36 -07:00
Ishaan Jaff
b38196fbd9 fix app mounting ui 2024-05-28 08:14:15 -07:00
Ishaan Jaff
f878c54986 Merge pull request #3874 from BerriAI/litellm_make_email_alerts_free
[Feat] Add, Test Email Alerts on Admin UI
2024-05-28 08:09:14 -07:00
Krrish Dholakia
a4634ef6d2 fix(proxy_server.py): handle next js route serving on fastapi 2024-05-28 01:09:05 -07:00
Krrish Dholakia
991fc9563a feat: e2e flow complete - admin can invite new users to proxy via invite links
Completes https://github.com/BerriAI/litellm/issues/3863
2024-05-27 23:19:30 -07:00
Ishaan Jaff
c8d3af1117 fix - testing email alerting 2024-05-27 22:46:45 -07:00
Krish Dholakia
e2c15e84a0 Merge branch 'main' into litellm_invite_users_via_link 2024-05-27 20:36:43 -07:00
Krrish Dholakia
b5e17de0bc feat(proxy_server.py): allow admin to invite users via invite link
Closes https://github.com/BerriAI/litellm/issues/3863
2024-05-27 20:32:25 -07:00
Ishaan Jaff
46d506169a Merge pull request #3870 from BerriAI/litellm_rename_end_user
[Feat] Rename `/end/user/new` -> `/customer/new` (maintain backwards compatibility)
2024-05-27 19:42:14 -07:00
Ishaan Jaff
a27b1dd13d fix - working customer/delete 2024-05-27 19:24:20 -07:00
Krish Dholakia
7f4a6ad956 Merge pull request #3869 from BerriAI/litellm_public_model_hubg
Admin UI - Public model hub
2024-05-27 19:14:11 -07:00
Ishaan Jaff
39936fa726 fix - /customer/update 2024-05-27 19:02:20 -07:00
Ishaan Jaff
d9b7d078de feat - add validation for existing customers 2024-05-27 18:29:09 -07:00
Ishaan Jaff
a997e8f811 docs string -> end user /new 2024-05-27 18:20:44 -07:00
Ishaan Jaff
a90b18a741 feat - rename end_user -> customer 2024-05-27 18:17:46 -07:00
Krrish Dholakia
abdc5b357f feat(model_hub.tsx): enable admin to expose a public model hub 2024-05-27 17:45:45 -07:00
Ishaan Jaff
5fed67dcc3 Merge pull request #3868 from BerriAI/litellm_show_updated_created_models
[Feat] Show Created at, Created by on `Models` Page
2024-05-27 16:32:29 -07:00
Ishaan Jaff
4d14366ae3 backend - add audit logs for adding models 2024-05-27 16:29:51 -07:00
Ishaan Jaff
d7043baf6d router - include updated at and created at in model info 2024-05-27 15:53:16 -07:00
Ishaan Jaff
ddfb76a770 fix - show activity by model_group 2024-05-27 11:25:16 -07:00
Krrish Dholakia
56fd0c60d1 fix(parallel_request_limiter.py): fix user+team tpm/rpm limit check
Closes https://github.com/BerriAI/litellm/issues/3788
2024-05-27 08:48:23 -07:00
Krrish Dholakia
8e9a3fef81 feat(proxy_server.py): expose new /model_group/info endpoint
returns model-group level info on supported params, max tokens, pricing, etc.
2024-05-26 14:07:35 -07:00
Krrish Dholakia
a78be8d6f3 build(ui): add enterprise cta 2024-05-25 21:41:53 -07:00
Krish Dholakia
960fa8b326 Merge pull request #3844 from BerriAI/litellm_region_based_alerts
feat(slack_alerting.py): enable provider-region based alerting
2024-05-25 21:03:16 -07:00
Krrish Dholakia
1f42d086d6 feat(ui): allow admin to configure slack alerting thresholds on ui 2024-05-25 21:01:19 -07:00
Ishaan Jaff
1c8bfe827d Merge pull request #3842 from BerriAI/litellm_set_budget_dur
[Fix] Set budget_duration on `/team/new` and `/team/update`
2024-05-25 16:31:39 -07:00
Ishaan Jaff
7928b3aea0 fix - test team_alias 2024-05-25 15:42:52 -07:00
Ishaan Jaff
8efa76f1a4 fix - updating team 2024-05-25 15:31:25 -07:00
Ishaan Jaff
61bc6fae63 test - new_team 2024-05-25 15:20:49 -07:00