Krrish Dholakia | fbfcd57798 | 2024-04-04 19:15:57 -07:00
fix(proxy_server.py): fix linting issue

Krrish Dholakia | e3c2bdef4d | 2024-04-04 18:56:20 -07:00
feat(ui): add models via ui
adds ability to add models via ui to the proxy. also fixes additional bugs around the new /model/new endpoint

Ishaan Jaff | ac5507bd84 | 2024-04-04 16:57:45 -07:00
ui show spend per tag

Ishaan Jaff | 1119cc49a8 | 2024-04-04 11:52:52 -07:00
Merge pull request #2840 from BerriAI/litellm_return_cache_key_responses
[FEAT] Proxy - Delete Cache Keys + return cache key in responses

Ishaan Jaff | 7e1d5c81b4 | 2024-04-04 11:00:00 -07:00
return cache key in streaming responses

Ishaan Jaff | c4cb0afa98 | 2024-04-04 10:56:47 -07:00
feat - delete cache key

Krrish Dholakia | 592241b4eb | 2024-04-04 10:40:32 -07:00
fix(proxy_server.py): fix linting error

Ishaan Jaff | 9dc4127576 | 2024-04-04 10:11:18 -07:00
v0 return cache key in responses

Krrish Dholakia | 4b56f08cbe | 2024-04-04 08:46:08 -07:00
test(test_models.py): fix delete model test

Krrish Dholakia | 346cd1876b | 2024-04-03 22:37:51 -07:00
fix: raise correct error

Krish Dholakia | 6bc48d7e8d | 2024-04-03 20:29:44 -07:00
Merge branch 'main' into litellm_model_add_api

Krrish Dholakia | f536fb13e6 | 2024-04-03 20:16:41 -07:00
fix(proxy_server.py): persist models added via /model/new to db
allows models to be used across instances
https://github.com/BerriAI/litellm/issues/2319 , https://github.com/BerriAI/litellm/issues/2329

Ishaan Jaff | 6edaaa92ab | 2024-04-03 19:38:07 -07:00
fix team update bug

Krrish Dholakia | 15e0099948 | 2024-04-03 13:05:43 -07:00
fix(proxy_server.py): return original model response via response headers - /v1/completions
to help devs with debugging

Krrish Dholakia | 8f24202c83 | 2024-04-03 07:56:53 -07:00
fix(proxy_server.py): support calling public endpoints when jwt_auth is enabled

Ishaan Jaff | 8a8233e428 | 2024-04-02 21:40:35 -07:00
fix safe use token id

Ishaan Jaff | 15685a8f53 | 2024-04-02 21:31:24 -07:00
v0 use token_in /key_generate

Ishaan Jaff | 1aeccf3f0a | 2024-04-02 20:50:47 -07:00
proxy test all-team-models

Krrish Dholakia | d7601a4844 | 2024-04-02 18:46:55 -07:00
perf(proxy_server.py): batch write spend logs
reduces prisma client errors by batch writing spend logs - max 1k logs at a time
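The batching described in d7601a4844 (flush spend logs to the DB in chunks of at most 1k rows instead of one write per request) can be sketched roughly as follows. This is a minimal illustration, not LiteLLM's actual code; the `SpendLogBatcher` name and `flush_fn` hook are hypothetical stand-ins for the proxy's prisma bulk-write call.

```python
# Hypothetical sketch: buffer per-request spend entries in memory and
# flush them in one bulk write once the buffer reaches max_batch_size.
class SpendLogBatcher:
    def __init__(self, flush_fn, max_batch_size=1000):
        self.flush_fn = flush_fn          # e.g. a bulk-insert DB call
        self.max_batch_size = max_batch_size
        self.buffer = []

    def add(self, log_entry):
        self.buffer.append(log_entry)
        if len(self.buffer) >= self.max_batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            batch, self.buffer = self.buffer, []
            self.flush_fn(batch)          # one bulk write instead of N writes


# usage: collect entries, flush at the cap and once more at shutdown
written = []
batcher = SpendLogBatcher(flush_fn=written.append, max_batch_size=3)
for i in range(7):
    batcher.add({"request_id": i, "spend": 0.001})
batcher.flush()  # drain the remainder -> batches of 3, 3, 1
```

The point of the cap is to bound both memory use and the size of any single DB statement, which is what reduces the prisma client errors the commit mentions.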
Ishaan Jaff | 3245d8cdce | 2024-04-02 16:04:09 -07:00
support all-proxy-models for teams

Ishaan Jaff | b83c452ddd | 2024-04-02 15:52:54 -07:00
support all-models-on-proxy

Ishaan Jaff | 73ef4780f7 | 2024-04-02 15:12:37 -07:00
(fix) support all-models alias on backend

Ishaan Jaff | 3d32567f4c | 2024-04-02 13:43:33 -07:00
fix show correct team based usage

Ishaan Jaff | 327cf73d73 | 2024-04-02 13:36:22 -07:00
fix left join on litellm team table

Krish Dholakia | 7233e5ab25 | 2024-04-02 08:53:34 -07:00
Merge pull request #2789 from BerriAI/litellm_set_ttl
fix(proxy_server.py): allow user to set in-memory + redis ttl

Ishaan Jaff | 92984a1c6f | 2024-04-01 19:46:50 -07:00
Merge pull request #2788 from BerriAI/litellm_support_-_models
[Feat] Allow using model = * on proxy config.yaml

Krish Dholakia | da85384649 | 2024-04-01 19:16:39 -07:00
Merge pull request #2787 from BerriAI/litellm_optional_team_jwt_claim
fix(proxy_server.py): don't require scope for team-based jwt access

Krrish Dholakia | c096ba566f | 2024-04-01 19:14:39 -07:00
fix(proxy_server.py): fix cache param arg name

Krrish Dholakia | 203e2776f8 | 2024-04-01 19:13:23 -07:00
fix(proxy_server.py): allow user to set in-memory + redis ttl
addresses - https://github.com/BerriAI/litellm/issues/2700
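The TTL feature in 203e2776f8 (separate expiry for the in-memory cache layer and the redis layer) would be set through the proxy's config.yaml. The fragment below is illustrative only; the exact key names are assumptions based on the commit description, not confirmed against the proxy's config schema:

```yaml
litellm_settings:
  cache: true
  cache_params:
    type: redis
    # assumed key names: separate TTLs (seconds) for each cache layer
    default_in_memory_ttl: 60
    default_redis_ttl: 600
```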
Ishaan Jaff | 037b624c89 | 2024-04-01 19:07:05 -07:00
(fix) allow wildcard models

Krrish Dholakia | c52819d47c | 2024-04-01 18:52:00 -07:00
fix(proxy_server.py): don't require scope for team-based jwt access
If a team with the client_id exists, it should be allowed to make a request; if it doesn't, an error is returned, as discussed.

Ishaan Jaff | b14b6083f5 | 2024-04-01 18:38:27 -07:00
Merge pull request #2785 from BerriAI/litellm_high_traffic_redis_caching_fixes
[Feat] Proxy - high traffic redis caching - when using `url`

Krrish Dholakia | 6467dd4e11 | 2024-04-01 18:01:38 -07:00
fix(tpm_rpm_limiter.py): fix cache init logic

Ishaan Jaff | 9accc544e9 | 2024-04-01 16:51:23 -07:00
add /cache/redis/info endpoint

Ishaan Jaff | d5d800e141 | 2024-04-01 11:18:00 -07:00
(fix) _update_end_user_cache

Krrish Dholakia | c9e6b05cfb | 2024-04-01 10:39:03 -07:00
test(test_max_tpm_rpm_limiter.py): add unit testing for redis namespaces working for tpm/rpm limits

Krish Dholakia | 1356f6cd32 | 2024-03-30 22:07:05 -07:00
Merge pull request #2775 from BerriAI/litellm_redis_user_api_key_cache_v3
fix(tpm_rpm_limiter.py): enable redis caching for tpm/rpm checks on keys/user/teams

Krrish Dholakia | f58fefd589 | 2024-03-30 20:01:36 -07:00
fix(tpm_rpm_limiter.py): enable redis caching for tpm/rpm checks on keys/user/teams
allows tpm/rpm checks to work across instances
https://github.com/BerriAI/litellm/issues/2730
Ishaan Jaff | 23a18d4be3 | 2024-03-30 14:02:43 -07:00
(ui) show proxy spend

Ishaan Jaff | 8daca76566 | 2024-03-30 13:25:32 -07:00
(ui) view spend by team name on usage

Krrish Dholakia | 0342cd3b6b | 2024-03-30 11:30:06 -07:00
fix(proxy_server.py): support azure openai text completion calls

Krrish Dholakia | af2eabba91 | 2024-03-29 21:47:10 -07:00
fix(proxy_server.py): fix /key/update endpoint to update key duration
also adds a test for this to our ci/cd

Krish Dholakia | 6d9887969f | 2024-03-29 21:13:27 -07:00
Merge pull request #2757 from BerriAI/litellm_fix_budget_alerts
fix(auth_checks.py): make global spend checks more accurate

Krrish Dholakia | 48ac36e70d | 2024-03-29 20:09:54 -07:00
docs(proxy_server.py): fix example on swagger for team member delete

Krrish Dholakia | 3810b050c1 | 2024-03-29 20:02:31 -07:00
fix(proxy_server.py): increment cached global proxy spend object

Krrish Dholakia | 5280fc809f | 2024-03-29 17:14:40 -07:00
fix(proxy_server.py): enforce end user budgets with 'litellm.max_end_user_budget' param

Krrish Dholakia | 786116783f | 2024-03-29 16:24:40 -07:00
fix(proxy_server.py): fix max budget check to also fire slack alert

Krrish Dholakia | be6481bb36 | 2024-03-29 15:34:13 -07:00
fix(proxy_server.py): fix checks

Krrish Dholakia | d8c15a5677 | 2024-03-29 14:57:44 -07:00
fix(auth_checks.py): make global spend checks more accurate

Ishaan Jaff | 7df2d7cb33 | 2024-03-29 09:41:00 -07:00
(fix) show correct spend on ui