Commit graph

3232 commits

Author SHA1 Message Date
ishaan-jaff
96cb6f3b10 (fix) azure+stream: count completion tokens 2024-01-03 12:06:39 +05:30
ishaan-jaff
f3b8d9c3ef (fix) counting response tokens+streaming 2024-01-03 12:06:39 +05:30
Krrish Dholakia
cd98d256b5 fix(proxy_server.py): add alerting for responses taking too long
https://github.com/BerriAI/litellm/issues/1298
2024-01-03 11:18:21 +05:30
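The slow-response alerting added in the commit above can be sketched with a plain asyncio pattern; this is an illustrative sketch, not litellm's actual implementation (`call_with_slow_alert` and the `alert` callback are hypothetical names):

```python
import asyncio

async def call_with_slow_alert(make_request, threshold_s, alert):
    """Run a request coroutine; fire an alert if it is still
    running after threshold_s seconds, then return its result."""
    task = asyncio.create_task(make_request())
    done, _pending = await asyncio.wait({task}, timeout=threshold_s)
    if not done:
        # request exceeded the threshold -- alert, but keep waiting
        alert(f"request still running after {threshold_s}s")
    return await task
```

The request is not cancelled on timeout; the alert fires while the call is allowed to finish.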
Krrish Dholakia
0d13c51615 fix(proxy/utils.py): fix self.alerting null case
https://github.com/BerriAI/litellm/issues/1298#issuecomment-1874798056
2024-01-03 10:12:21 +05:30
Krrish Dholakia
a778f8a00e bump: version 1.16.10 → 1.16.11 2024-01-02 22:26:47 +05:30
Krrish Dholakia
070520d237 fix(proxy_server.py): support smtp email auth
previously was a hard resend package dependency. removed in favor of allowing any smtp server connection (including resend)
2024-01-02 22:22:19 +05:30
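The commit above replaces a hard `resend` package dependency with a generic SMTP connection. A minimal standard-library sketch of that approach (function names are hypothetical, not litellm's API):

```python
import smtplib
from email.message import EmailMessage

def build_alert_message(from_addr, to_addr, subject, body):
    """Assemble a plain-text email."""
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_alert_email(host, port, username, password, msg):
    """Deliver over any SMTP server (Resend's SMTP gateway included)."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()  # upgrade the connection to TLS before login
        server.login(username, password)
        server.send_message(msg)
```

Because only host, port, and credentials are configured, any SMTP-speaking provider works interchangeably.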
Ased Mammad
c39c8f70eb fix(proxy_server.py) Check when '_hidden_params' is None 2024-01-02 19:04:51 +03:30
Krrish Dholakia
940569703e feat(proxy_server.py): add slack alerting to proxy server
add alerting for calls hanging, failing and db read/writes failing

https://github.com/BerriAI/litellm/issues/1298
2024-01-02 17:45:18 +05:30
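Slack alerting of the kind described above (hanging calls, failures, db read/write errors) typically posts to an incoming-webhook URL. A minimal sketch with the standard library only (helper names are hypothetical):

```python
import json
import urllib.request

def build_slack_payload(message):
    """JSON body for a Slack incoming webhook."""
    return json.dumps({"text": message}).encode("utf-8")

def send_slack_alert(webhook_url, message):
    """POST an alert to a Slack incoming-webhook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=build_slack_payload(message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Slack answers 200 on success
```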
ishaan-jaff
c27b1fc5f8 (feat) proxy swagger - make admin link clickable 2024-01-02 17:04:32 +05:30
ishaan-jaff
14738ec89d (test) xinference on litellm router 2024-01-02 16:51:08 +05:30
ishaan-jaff
bfbed2d93d (test) xinference embeddings 2024-01-02 15:41:51 +05:30
ishaan-jaff
790dcff5e0 (feat) add xinference as an embedding provider 2024-01-02 15:32:26 +05:30
Krrish Dholakia
0fffcc1579 fix(utils.py): support token counting for gpt-4-vision models 2024-01-02 14:41:42 +05:30
ishaan-jaff
bfae0fe935 (test) proxy - pass user_config 2024-01-02 14:15:03 +05:30
ishaan-jaff
075eb1a516 (types) routerConfig 2024-01-02 14:14:29 +05:30
ishaan-jaff
9afdc8b4ee (feat) add Router init Pydantic Type 2024-01-02 13:30:24 +05:30
ishaan-jaff
1f8fc6d2a7 (feat) litellm add types for completion, embedding request 2024-01-02 12:27:08 +05:30
ishaan-jaff
6d2b9fd470 (feat) use - user router for aembedding 2024-01-02 12:27:08 +05:30
Krrish Dholakia
2ab31bcaf8 fix(lowest_tpm_rpm.py): handle null case for text/message input 2024-01-02 12:24:29 +05:30
ishaan-jaff
0acaaf8f8f (test) sustained load test proxy 2024-01-02 12:10:34 +05:30
ishaan-jaff
31a896908b (test) proxy - use, user provided model_list 2024-01-02 12:10:34 +05:30
ishaan-jaff
ddc31c4810 (feat) proxy - use user_config for /chat/completions 2024-01-02 12:10:34 +05:30
Krrish Dholakia
a37a18ca80 feat(router.py): add support for retry/fallbacks for async embedding calls 2024-01-02 11:54:28 +05:30
Krrish Dholakia
c12e3bd565 fix(router.py): fix model name passed through 2024-01-02 11:15:30 +05:30
Krrish Dholakia
dff4c172d0 refactor(test_router_caching.py): move tpm/rpm routing tests to separate file 2024-01-02 11:10:11 +05:30
ishaan-jaff
18ef244230 (test) bedrock-test passing boto3 client 2024-01-02 10:23:28 +05:30
ishaan-jaff
d1e8d13c4f (fix) init_bedrock_client 2024-01-01 22:48:56 +05:30
Ishaan Jaff
9adcfedc04 (test) fix test_get_model_cost_map.py 2024-01-01 21:58:48 +05:30
Krrish Dholakia
a83e2e07cf fix(router.py): correctly raise no model available error
https://github.com/BerriAI/litellm/issues/1289
2024-01-01 21:22:42 +05:30
Ishaan Jaff
9cb5a2bec0 Merge pull request #1290 from fcakyon/patch-1
fix typos & add missing names for azure models
2024-01-01 17:58:17 +05:30
Krrish Dholakia
e1e3721917 build(user.py): fix page param read issue 2024-01-01 17:25:52 +05:30
Krrish Dholakia
a41e56a730 fix(proxy_server.py): enabling user auth via ui
https://github.com/BerriAI/litellm/issues/1231
2024-01-01 17:14:24 +05:30
fatih
6566ebd815 update azure turbo namings 2024-01-01 13:03:08 +03:00
Krrish Dholakia
ca40a88987 fix(proxy_server.py): check if user email in user db 2024-01-01 14:19:59 +05:30
ishaan-jaff
7623c1a846 (feat) proxy - only use print_verbose 2024-01-01 13:52:11 +05:30
ishaan-jaff
84cfa1c42a (test) ci/cd 2024-01-01 13:51:27 +05:30
Krrish Dholakia
24e7dc359d feat(proxy_server.py): introduces new /user/auth endpoint for handling user email auth
decouples streamlit ui from proxy server. this then requires the proxy to handle user auth separately.
2024-01-01 13:44:47 +05:30
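An email-auth endpoint like the `/user/auth` one introduced above usually issues a single-use login token that is emailed to the user and redeemed once. A minimal sketch of that token flow (names and the in-memory store are hypothetical; the real proxy persists state differently):

```python
import secrets
import time
from typing import Dict, Optional, Tuple

# in-memory store of one-time login tokens: token -> (email, expiry)
_login_tokens: Dict[str, Tuple[str, float]] = {}

def issue_login_token(email: str, ttl_seconds: int = 600) -> str:
    """Create a single-use token to embed in the login link emailed to the user."""
    token = secrets.token_urlsafe(32)
    _login_tokens[token] = (email, time.time() + ttl_seconds)
    return token

def verify_login_token(token: str) -> Optional[str]:
    """Redeem a token exactly once; returns the email if valid, else None."""
    entry = _login_tokens.pop(token, None)  # pop => single use
    if entry is None:
        return None
    email, expiry = entry
    if time.time() > expiry:
        return None
    return email
```

Popping the token on redemption is what makes the link single-use, so a leaked or replayed link fails on the second attempt.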
ishaan-jaff
52db2a6040 (feat) proxy - remove streamlit ui on startup 2024-01-01 12:54:23 +05:30
ishaan-jaff
c8f8bd9e57 (test) proxy - log metadata to langfuse 2024-01-01 11:54:16 +05:30
ishaan-jaff
694956b44e (test) proxy - pass metadata to openai client 2024-01-01 11:12:57 +05:30
ishaan-jaff
dacd86030b (fix) proxy - remove extra print statement 2024-01-01 10:52:09 +05:30
ishaan-jaff
16fb83e007 (fix) proxy - remove errant print statement 2024-01-01 10:48:12 +05:30
ishaan-jaff
84fbc903aa (test) langfuse - set custom trace_id 2023-12-30 20:19:22 +05:30
ishaan-jaff
8ae4554a8a (feat) langfuse - set custom trace_id, trace_user_id 2023-12-30 20:19:03 +05:30
ishaan-jaff
cc7b964433 (docs) add litellm.cache docstring 2023-12-30 20:04:08 +05:30
ishaan-jaff
70cdc16d6f (feat) cache context manager - update cache 2023-12-30 19:50:53 +05:30
ishaan-jaff
e35f17ca3c (test) caching - context managers 2023-12-30 19:33:47 +05:30
ishaan-jaff
ddddfe6602 (feat) add cache context manager 2023-12-30 19:32:51 +05:30
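A cache context manager like the one added above can be sketched generically with `contextlib`; this is an illustrative pattern, not litellm's actual `cache` API:

```python
from contextlib import contextmanager

@contextmanager
def enable_cache(settings):
    """Temporarily enable caching on a settings mapping, restoring
    the previous value on exit (even if the block raises)."""
    previous = settings.get("cache")
    settings["cache"] = True
    try:
        yield settings
    finally:
        settings["cache"] = previous
```

The `finally` clause guarantees the prior cache setting is restored however the block exits.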
Krrish Dholakia
8ff3bbcfee fix(proxy_server.py): router model group alias routing
check model alias group routing before specific deployment routing, to deal with an alias being the same as a deployment name (e.g. gpt-3.5-turbo)

2023-12-30 17:55:24 +05:30
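The routing fix above checks the model-group alias map before exact deployment names, so an alias that shadows a deployment name (e.g. "gpt-3.5-turbo") still routes through its group. A toy illustration of that lookup order (all names hypothetical):

```python
def resolve_model(model, alias_map, deployments):
    """Resolve a requested model name to a deployment, consulting the
    model-group alias map *before* exact deployment names."""
    if model in alias_map:
        model = alias_map[model]  # rewrite alias -> canonical group name
    return deployments.get(model)
```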
Krrish Dholakia
027218c3f0 test(test_lowest_latency_routing.py): add more tests 2023-12-30 17:41:42 +05:30