Ishaan Jaff | c27246e6f2 | Update README.md | 2024-01-03 15:15:24 +05:30
ishaan-jaff | fea0a933ae | (test) use s3 buckets cache | 2024-01-03 15:13:43 +05:30
ishaan-jaff | 00364da993 | (feat) add s3 Bucket as Cache | 2024-01-03 15:13:43 +05:30
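Commit 00364da993 above adds an S3 bucket as a cache backend. A minimal sketch of how this is typically enabled, assuming litellm.Cache accepts type="s3" with s3_bucket_name / s3_region_name keyword arguments and that caching=True opts a call into the cache:

```python
# Illustrative sketch: S3 bucket as the LiteLLM response cache.
# Assumes litellm.Cache(type="s3", ...) takes s3_bucket_name / s3_region_name
# and that caching=True opts a completion call into the cache.
import litellm
from litellm import Cache, completion

litellm.cache = Cache(
    type="s3",
    s3_bucket_name="my-litellm-cache-bucket",  # hypothetical bucket
    s3_region_name="us-west-2",
)

# The second identical call should be served from the S3 cache.
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
    caching=True,
)
```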
Krrish Dholakia | 14e501845f | fix(proxy_server.py): add support for setting master key via .env | 2024-01-03 15:10:25 +05:30
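Commit 14e501845f lets the proxy master key come from the environment rather than the config/CLI. A small sketch, assuming the variable is named LITELLM_MASTER_KEY (verify the exact key against the proxy docs):

```python
# Illustrative sketch: master key supplied via the environment / a .env file.
# Assumes the proxy reads LITELLM_MASTER_KEY; verify the exact variable name
# against the proxy docs.
import os

os.environ["LITELLM_MASTER_KEY"] = "sk-1234"  # placeholder; put this in .env in practice

# Clients would then authenticate against the proxy with the same key, e.g.
#   openai.OpenAI(api_key="sk-1234", base_url="<proxy url>")
```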
Krrish Dholakia | ef8f1acfa4 | refactor(proxy_server.py): more debug statements | 2024-01-03 13:59:41 +05:30
Krrish Dholakia | 6c8cc33d02 | docs(caching.md): fix typo | 2024-01-03 12:47:16 +05:30
Krrish Dholakia | 8cee267a5b | fix(caching.py): support ttl, s-max-age, and no-cache cache controls (https://github.com/BerriAI/litellm/issues/1306) | 2024-01-03 12:42:43 +05:30
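Commit 8cee267a5b adds per-request cache controls. A sketch of how they might be passed, assuming completion() accepts a cache dict with ttl / s-maxage / no-cache keys (spellings may differ by version):

```python
# Illustrative sketch: per-request cache controls.
# Assumes completion() accepts a `cache` dict with "ttl", "s-maxage", "no-cache".
import litellm
from litellm import Cache, completion

litellm.cache = Cache()  # local in-memory cache, just for illustration

# Cache this response for 10 minutes; treat older cached copies as stale.
cached = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is LiteLLM?"}],
    cache={"ttl": 600, "s-maxage": 600},
)

# Skip the cache lookup entirely for this call.
fresh = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is LiteLLM?"}],
    cache={"no-cache": True},
)
```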
ishaan-jaff | 8772d87947 | bump: version 1.16.12 → 1.16.13 | 2024-01-03 12:10:22 +05:30
ishaan-jaff | 2bea0c742e | (test) completion tokens counting + azure stream | 2024-01-03 12:06:39 +05:30
ishaan-jaff | 96cb6f3b10 | (fix) azure+stream: count completion tokens | 2024-01-03 12:06:39 +05:30
ishaan-jaff | f3b8d9c3ef | (fix) counting response tokens+streaming | 2024-01-03 12:06:39 +05:30
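The three streaming/token-counting commits above concern completion-token counts for streamed responses. A sketch of checking this, assuming litellm.stream_chunk_builder() rebuilds a full response (including usage) from collected chunks; the same applies to azure/<deployment> models, which these commits specifically fix:

```python
# Illustrative sketch: recover completion-token usage from a streamed response.
# Assumes litellm.stream_chunk_builder() rebuilds a full ModelResponse
# (including usage) from the collected chunks.
import litellm

messages = [{"role": "user", "content": "Write one sentence about caching."}]
chunks = []
for chunk in litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    chunks.append(chunk)

rebuilt = litellm.stream_chunk_builder(chunks, messages=messages)
print(rebuilt.usage)  # completion_tokens should be populated
```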
Krrish Dholakia | 5055aeb254 | docs(alerting.md): add alerting thresholds to docs | 2024-01-03 11:21:56 +05:30
Krrish Dholakia | cd98d256b5 | fix(proxy_server.py): add alerting for responses taking too long (https://github.com/BerriAI/litellm/issues/1298) | 2024-01-03 11:18:21 +05:30
Krrish Dholakia | 0a6e4db999 | bump: version 1.16.11 → 1.16.12 | 2024-01-03 10:12:48 +05:30
Krrish Dholakia | 0d13c51615 | fix(proxy/utils.py): fix self.alerting null case (https://github.com/BerriAI/litellm/issues/1298#issuecomment-1874798056) | 2024-01-03 10:12:21 +05:30
Krrish Dholakia | ff4eb5a5d4 | docs(alerting.md): add slack alerting to docs | 2024-01-02 22:47:01 +05:30
Krrish Dholakia | d17ffdbc83 | docs(ui.md): update ui docs for smtp server hosting | 2024-01-02 22:30:17 +05:30
Krrish Dholakia | a778f8a00e | bump: version 1.16.10 → 1.16.11 | 2024-01-02 22:26:47 +05:30
Krrish Dholakia | 070520d237 | fix(proxy_server.py): support smtp email auth; previously was a hard resend package dependency, removed in favor of allowing any smtp server connection (including resend) | 2024-01-02 22:22:19 +05:30
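Commit 070520d237 drops the hard resend dependency in favor of any SMTP connection. A sketch of the kind of plain-SMTP send this enables, using only the standard library; the SMTP_* variable names here are illustrative, not necessarily the env keys the proxy reads:

```python
# Illustrative sketch: plain-SMTP email send, no `resend` package required.
# The SMTP_* names are placeholders, not necessarily the proxy's env keys.
import os
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = os.environ.get("SMTP_SENDER_EMAIL", "noreply@example.com")
msg["To"] = "user@example.com"
msg["Subject"] = "Your LiteLLM proxy key"
msg.set_content("Here is your key.")

host = os.environ.get("SMTP_HOST", "smtp.example.com")
port = int(os.environ.get("SMTP_PORT", "587"))
with smtplib.SMTP(host, port) as server:
    server.starttls()  # works with any provider exposing SMTP, including resend
    server.login(os.environ["SMTP_USERNAME"], os.environ["SMTP_PASSWORD"])
    server.send_message(msg)
```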
Krish Dholakia | 89f314b5ee | Merge pull request #1300 from asedmammad/patch-1: fix(proxy_server.py) Check when '_hidden_params' is None | 2024-01-02 21:34:09 +05:30
Ased Mammad | c39c8f70eb | fix(proxy_server.py) Check when '_hidden_params' is None | 2024-01-02 19:04:51 +03:30
Krrish Dholakia | 940569703e | feat(proxy_server.py): add slack alerting to proxy server; add alerting for calls hanging, failing and db read/writes failing (https://github.com/BerriAI/litellm/issues/1298) | 2024-01-02 17:45:18 +05:30
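Commit 940569703e adds slack alerting for hanging calls, failures, and DB read/write errors. A sketch of the expected wiring, assuming a SLACK_WEBHOOK_URL environment variable and alerting / alerting_threshold settings under general_settings (see the alerting.md commits above for the authoritative names):

```python
# Illustrative sketch: proxy slack alerting configuration.
# Assumes a SLACK_WEBHOOK_URL env var and alerting / alerting_threshold keys
# under general_settings; confirm the names in docs/alerting.md.
import os

os.environ["SLACK_WEBHOOK_URL"] = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

# config.yaml equivalent, expressed as a dict:
general_settings = {
    "alerting": ["slack"],       # alerts for hanging calls, failures, DB read/write errors
    "alerting_threshold": 300,   # seconds before a request counts as "taking too long"
}
```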
Krish Dholakia | 10b71b0ff1 | Update README.md | 2024-01-02 17:22:44 +05:30
ishaan-jaff | c27b1fc5f8 | (feat) proxy swagger - make admin link clickable | 2024-01-02 17:04:32 +05:30
ishaan-jaff | 8186af64c7 | (docs) xinference on proxy | 2024-01-02 16:57:25 +05:30
ishaan-jaff | 8f8ac03961 | (docs) proxy - using xinference | 2024-01-02 16:55:10 +05:30
ishaan-jaff | 14738ec89d | (test) xinference on litellm router | 2024-01-02 16:51:08 +05:30
Ishaan Jaff | 5bf44e8a64 | Update README.md - add xinference [Xorbits Inference] | 2024-01-02 16:38:35 +05:30
ishaan-jaff | bfbed2d93d | (test) xinference embeddings | 2024-01-02 15:41:51 +05:30
ishaan-jaff | fdd4e72503 | (docs) xinference embedding | 2024-01-02 15:39:25 +05:30
ishaan-jaff | 790dcff5e0 | (feat) add xinference as an embedding provider | 2024-01-02 15:32:26 +05:30
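Commit 790dcff5e0 adds xinference (Xorbits Inference) as an embedding provider. A sketch of calling it, assuming the xinference/<model> prefix and an api_base pointing at a locally running xinference server (model name and port are examples):

```python
# Illustrative sketch: embeddings via the xinference provider.
# Assumes the "xinference/<model>" prefix and an api_base pointing at a
# locally running Xorbits Inference server.
from litellm import embedding

response = embedding(
    model="xinference/bge-base-en",
    api_base="http://127.0.0.1:9997/v1",
    input=["good morning from litellm"],
)
print(response)
```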
ishaan-jaff | 0d0ee9e108 | (docs) passing user config | 2024-01-02 14:43:02 +05:30
Krrish Dholakia | 0fffcc1579 | fix(utils.py): support token counting for gpt-4-vision models | 2024-01-02 14:41:42 +05:30
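Commit 0fffcc1579 fixes token counting for gpt-4-vision style messages, where content is a list of text and image_url parts. A sketch using litellm.token_counter, assuming it accepts model and messages keyword arguments:

```python
# Illustrative sketch: token counting for a gpt-4-vision style message.
# Assumes litellm.token_counter(model=..., messages=...) handles list-form content.
import litellm

messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}]

print(litellm.token_counter(model="gpt-4-vision-preview", messages=messages))
```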
ishaan-jaff | eda6ab8cdc | bump: version 1.16.9 → 1.16.10 | 2024-01-02 14:39:12 +05:30
ishaan-jaff | 1efd1cb30f | (docs) passing user_config to completion | 2024-01-02 14:19:44 +05:30
ishaan-jaff | bfae0fe935 | (test) proxy - pass user_config | 2024-01-02 14:15:03 +05:30
ishaan-jaff | 075eb1a516 | (types) routerConfig | 2024-01-02 14:14:29 +05:30
ishaan-jaff | 60164cd5e4 | (docs) pass user config to proxy / router | 2024-01-02 14:14:14 +05:30
ishaan-jaff | 9afdc8b4ee | (feat) add Router init Pydantic Type | 2024-01-02 13:30:24 +05:30
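Commit 9afdc8b4ee adds a Pydantic type for the Router init params. The sketch below shows the plain model_list shape that type models; the exact name and import path of the Pydantic class are not assumed here:

```python
# Illustrative sketch: the Router init params the new Pydantic type models.
# The model_list shape below is the standard router config; the Pydantic
# class name/path is not assumed here.
import os
from litellm import Router

model_list = [
    {
        "model_name": "gpt-3.5-turbo",              # alias callers request
        "litellm_params": {                          # what the router actually calls
            "model": "azure/my-gpt-35-deployment",   # hypothetical deployment
            "api_key": os.getenv("AZURE_API_KEY"),
            "api_base": "https://my-endpoint.openai.azure.com/",
        },
    },
]

router = Router(model_list=model_list, num_retries=2)
```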
Krrish Dholakia | a9f58ec100 | build(model_prices_and_context_window.json): add azure gpt-4-turbo, gpt-4-turbo-vision, dall-e-2 and dall-e-3 pricing | 2024-01-02 12:50:15 +05:30
ishaan-jaff | 1f8fc6d2a7 | (feat) litellm add types for completion, embedding request | 2024-01-02 12:27:08 +05:30
ishaan-jaff | 6d2b9fd470 | (feat) use - user router for aembedding | 2024-01-02 12:27:08 +05:30
Krrish Dholakia | 2ab31bcaf8 | fix(lowest_tpm_rpm.py): handle null case for text/message input | 2024-01-02 12:24:29 +05:30
ishaan-jaff | 11f92c0074 | (docs) router - init params | 2024-01-02 12:14:32 +05:30
ishaan-jaff | 0acaaf8f8f | (test) sustained load test proxy | 2024-01-02 12:10:34 +05:30
ishaan-jaff | 31a896908b | (test) proxy - use user provided model_list | 2024-01-02 12:10:34 +05:30
ishaan-jaff | ddc31c4810 | (feat) proxy - use user_config for /chat/completions | 2024-01-02 12:10:34 +05:30
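The user_config commits above let a caller ship their own model_list to the proxy's /chat/completions. A sketch of one plausible way to pass it through the OpenAI SDK; treating extra_body={"user_config": ...} as the transport is an assumption, so check the accompanying docs commits for the supported shape:

```python
# Illustrative sketch: per-request user_config sent to the proxy.
# Passing it via the OpenAI SDK's extra_body is an assumption; the proxy URL,
# master key, and api_key values are placeholders.
import openai

client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:8000")

user_config = {
    "model_list": [
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "my-openai-key"},
        }
    ]
}

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"user_config": user_config},
)
```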
Krrish Dholakia | a37a18ca80 | feat(router.py): add support for retry/fallbacks for async embedding calls | 2024-01-02 11:54:28 +05:30
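Commit a37a18ca80 adds retry/fallback support for async embedding calls on the router. A sketch, assuming Router.aembedding() and the fallbacks=[{primary: [backup, ...]}] init shape; the model group names and the backup provider are examples:

```python
# Illustrative sketch: async embeddings with retries and a fallback group.
# Assumes Router.aembedding() and the fallbacks=[{primary: [backup]}] shape.
import asyncio
import os
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "primary-embed",
         "litellm_params": {"model": "text-embedding-ada-002",
                            "api_key": os.getenv("OPENAI_API_KEY")}},
        {"model_name": "backup-embed",
         "litellm_params": {"model": "xinference/bge-base-en",
                            "api_base": "http://127.0.0.1:9997/v1"}},
    ],
    num_retries=2,
    fallbacks=[{"primary-embed": ["backup-embed"]}],
)

async def main():
    response = await router.aembedding(model="primary-embed", input=["hello world"])
    print(response)

asyncio.run(main())
```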
Krrish Dholakia | c12e3bd565 | fix(router.py): fix model name passed through | 2024-01-02 11:15:30 +05:30
Krrish Dholakia | dff4c172d0 | refactor(test_router_caching.py): move tpm/rpm routing tests to separate file | 2024-01-02 11:10:11 +05:30