ishaan-jaff
|
bfbed2d93d
|
(test) xinference embeddings
|
2024-01-02 15:41:51 +05:30 |
|
ishaan-jaff
|
790dcff5e0
|
(feat) add xinference as an embedding provider
|
2024-01-02 15:32:26 +05:30 |
|
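The new provider is reached through the `xinference/` model prefix. A minimal sketch of an embedding call; the model name and local server address below are illustrative placeholders, not values taken from this log:

```python
import litellm

# Xinference serves an OpenAI-compatible API; point litellm at it.
response = litellm.embedding(
    model="xinference/bge-base-en",        # illustrative model name
    api_base="http://127.0.0.1:9997/v1",   # illustrative local Xinference endpoint
    input=["good morning from litellm"],
)
print(response.data[0]["embedding"][:5])
```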
Krrish Dholakia
|
0fffcc1579
|
fix(utils.py): support token counting for gpt-4-vision models
|
2024-01-02 14:41:42 +05:30 |
|
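The fix concerns messages whose `content` is a list of text and image parts (the OpenAI vision format). A sketch of the counting call with a placeholder image URL; the exact pre-fix failure mode is not shown in the log:

```python
import litellm

# gpt-4-vision messages carry content as a list of typed parts
messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}]

n_tokens = litellm.token_counter(model="gpt-4-vision-preview", messages=messages)
print(n_tokens)
```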
ishaan-jaff
|
bfae0fe935
|
(test) proxy - pass user_config
|
2024-01-02 14:15:03 +05:30 |
|
ishaan-jaff
|
075eb1a516
|
(types) routerConfig
|
2024-01-02 14:14:29 +05:30 |
|
ishaan-jaff
|
9afdc8b4ee
|
(feat) add Router init Pydantic Type
|
2024-01-02 13:30:24 +05:30 |
|
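The type validates the shape `Router` already accepts. A sketch of a plain init with that shape, where the deployment name and key are placeholders; per the `(types) routerConfig` commit above, the pydantic type itself is presumably exported as `RouterConfig`:

```python
from litellm import Router

model_list = [{
    "model_name": "gpt-3.5-turbo",        # model group name callers use
    "litellm_params": {                   # params forwarded to litellm.completion
        "model": "azure/chatgpt-v-2",     # placeholder deployment
        "api_key": "sk-...",              # placeholder key
    },
}]

router = Router(model_list=model_list)
```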
ishaan-jaff
|
1f8fc6d2a7
|
(feat) litellm add types for completion, embedding request
|
2024-01-02 12:27:08 +05:30 |
|
ishaan-jaff
|
6d2b9fd470
|
(feat) use user router for aembedding
|
2024-01-02 12:27:08 +05:30 |
|
Krrish Dholakia
|
2ab31bcaf8
|
fix(lowest_tpm_rpm.py): handle null case for text/message input
|
2024-01-02 12:24:29 +05:30 |
|
ishaan-jaff
|
0acaaf8f8f
|
(test) sustained load test proxy
|
2024-01-02 12:10:34 +05:30 |
|
ishaan-jaff
|
31a896908b
|
(test) proxy - use user-provided model_list
|
2024-01-02 12:10:34 +05:30 |
|
ishaan-jaff
|
ddc31c4810
|
(feat) proxy - use user_config for /chat/completions
|
2024-01-02 12:10:34 +05:30 |
|
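A sketch of passing a per-request router config to the proxy through the OpenAI client's `extra_body`; the proxy address, keys, and config values are placeholders:

```python
import openai

client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:8000")  # placeholder proxy

user_config = {
    "model_list": [{   # same shape as a Router model_list
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
    }]
}

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"user_config": user_config},  # non-standard fields travel via extra_body
)
```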
Krrish Dholakia
|
a37a18ca80
|
feat(router.py): add support for retry/fallbacks for async embedding calls
|
2024-01-02 11:54:28 +05:30 |
|
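A sketch of an async embedding call through the router with retries and a fallback group configured; all deployment params are placeholders:

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "text-embedding-ada-002",
         "litellm_params": {"model": "text-embedding-ada-002", "api_key": "sk-..."}},
        {"model_name": "embed-fallback",
         "litellm_params": {"model": "azure/azure-embedding-model",
                            "api_key": "...", "api_base": "https://..."}},
    ],
    num_retries=2,                                               # retry transient failures first
    fallbacks=[{"text-embedding-ada-002": ["embed-fallback"]}],  # then switch model groups
)

async def main():
    resp = await router.aembedding(
        model="text-embedding-ada-002",
        input=["hello world"],
    )
    print(len(resp.data))

asyncio.run(main())
```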
Krrish Dholakia
|
c12e3bd565
|
fix(router.py): fix model name passed through
|
2024-01-02 11:15:30 +05:30 |
|
Krrish Dholakia
|
dff4c172d0
|
refactor(test_router_caching.py): move tpm/rpm routing tests to separate file
|
2024-01-02 11:10:11 +05:30 |
|
ishaan-jaff
|
18ef244230
|
(test) bedrock-test passing boto3 client
|
2024-01-02 10:23:28 +05:30 |
|
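A sketch of handing litellm a pre-built boto3 client, as the test exercises; the `aws_bedrock_client` keyword follows litellm's docs of this era, and the region and model are placeholders:

```python
import boto3
from litellm import completion

bedrock = boto3.client(
    service_name="bedrock-runtime",  # Bedrock's inference endpoint
    region_name="us-east-1",         # placeholder region
)

response = completion(
    model="bedrock/anthropic.claude-instant-v1",  # placeholder Bedrock model
    messages=[{"role": "user", "content": "hi"}],
    aws_bedrock_client=bedrock,      # reuse the caller's client instead of building one
)
```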
ishaan-jaff
|
d1e8d13c4f
|
(fix) init_bedrock_client
|
2024-01-01 22:48:56 +05:30 |
|
Ishaan Jaff
|
9adcfedc04
|
(test) fix test_get_model_cost_map.py
|
2024-01-01 21:58:48 +05:30 |
|
Krrish Dholakia
|
a83e2e07cf
|
fix(router.py): correctly raise no model available error
https://github.com/BerriAI/litellm/issues/1289
|
2024-01-01 21:22:42 +05:30 |
|
Ishaan Jaff
|
9cb5a2bec0
|
Merge pull request #1290 from fcakyon/patch-1
fix typos & add missing names for azure models
|
2024-01-01 17:58:17 +05:30 |
|
Krrish Dholakia
|
e1e3721917
|
build(user.py): fix page param read issue
|
2024-01-01 17:25:52 +05:30 |
|
Krrish Dholakia
|
a41e56a730
|
fix(proxy_server.py): enabling user auth via ui
https://github.com/BerriAI/litellm/issues/1231
|
2024-01-01 17:14:24 +05:30 |
|
fatih
|
6566ebd815
|
update azure turbo model names
|
2024-01-01 13:03:08 +03:00 |
|
Krrish Dholakia
|
ca40a88987
|
fix(proxy_server.py): check if user email in user db
|
2024-01-01 14:19:59 +05:30 |
|
ishaan-jaff
|
7623c1a846
|
(feat) proxy - only use print_verbose
|
2024-01-01 13:52:11 +05:30 |
|
ishaan-jaff
|
84cfa1c42a
|
(test) ci/cd
|
2024-01-01 13:51:27 +05:30 |
|
Krrish Dholakia
|
24e7dc359d
|
feat(proxy_server.py): introduces new /user/auth endpoint for handling user email auth
decouples the streamlit ui from the proxy server; the proxy must then handle user auth separately.
|
2024-01-01 13:44:47 +05:30 |
|
ishaan-jaff
|
52db2a6040
|
(feat) proxy - remove streamlit ui on startup
|
2024-01-01 12:54:23 +05:30 |
|
ishaan-jaff
|
c8f8bd9e57
|
(test) proxy - log metadata to langfuse
|
2024-01-01 11:54:16 +05:30 |
|
ishaan-jaff
|
694956b44e
|
(test) proxy - pass metadata to openai client
|
2024-01-01 11:12:57 +05:30 |
|
ishaan-jaff
|
dacd86030b
|
(fix) proxy - remove extra print statement
|
2024-01-01 10:52:09 +05:30 |
|
ishaan-jaff
|
16fb83e007
|
(fix) proxy - remove errant print statement
|
2024-01-01 10:48:12 +05:30 |
|
ishaan-jaff
|
84fbc903aa
|
(test) langfuse - set custom trace_id
|
2023-12-30 20:19:22 +05:30 |
|
ishaan-jaff
|
8ae4554a8a
|
(feat) langfuse - set custom trace_id, trace_user_id
|
2023-12-30 20:19:03 +05:30 |
|
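Both IDs ride on the request's `metadata`, which the Langfuse callback reads. A sketch; the ID values are placeholders, and the Langfuse keys are assumed to be set in the environment:

```python
import litellm

litellm.success_callback = ["langfuse"]  # needs LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY set

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    metadata={
        "trace_id": "my-trace-id-123",  # group related calls under one Langfuse trace
        "trace_user_id": "user-456",    # attribute the trace to an end user
    },
)
```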
ishaan-jaff
|
cc7b964433
|
(docs) add litellm.cache docstring
|
2023-12-30 20:04:08 +05:30 |
|
ishaan-jaff
|
70cdc16d6f
|
(feat) cache context manager - update cache
|
2023-12-30 19:50:53 +05:30 |
|
ishaan-jaff
|
e35f17ca3c
|
(test) caching - context managers
|
2023-12-30 19:33:47 +05:30 |
|
ishaan-jaff
|
ddddfe6602
|
(feat) add cache context manager
|
2023-12-30 19:32:51 +05:30 |
|
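Assuming the manager landed as `litellm.enable_cache()` / `litellm.disable_cache()` (matching litellm's current docs), a sketch of scoping the cache around two identical calls:

```python
import litellm

litellm.enable_cache()  # in-memory cache by default

messages = [{"role": "user", "content": "what is 2 + 2?"}]
r1 = litellm.completion(model="gpt-3.5-turbo", messages=messages)
r2 = litellm.completion(model="gpt-3.5-turbo", messages=messages)  # cache hit

litellm.disable_cache()  # calls below this line skip the cache
```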
Krrish Dholakia
|
8ff3bbcfee
|
fix(proxy_server.py): router model group alias routing
check model alias group routing before specific deployment routing, to deal with an alias being the same as a deployment name (e.g. gpt-3.5-turbo)
|
2023-12-30 17:55:24 +05:30 |
|
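The fix matters when an alias collides with an underlying deployment name. A sketch using the router's `model_group_alias` mapping to illustrate the same resolution order; group names and params are placeholders:

```python
from litellm import Router

router = Router(
    model_list=[{
        "model_name": "openai-gpt-3.5",  # the real model group
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
    }],
    # "gpt-3.5-turbo" resolves to the "openai-gpt-3.5" group, even though the
    # alias is also a valid deployment name; alias routing is checked first
    model_group_alias={"gpt-3.5-turbo": "openai-gpt-3.5"},
)

resp = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
```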
Krrish Dholakia
|
027218c3f0
|
test(test_lowest_latency_routing.py): add more tests
|
2023-12-30 17:41:42 +05:30 |
|
Krrish Dholakia
|
f2d0d5584a
|
fix(router.py): fix latency based routing
|
2023-12-30 17:25:40 +05:30 |
|
Krrish Dholakia
|
c41b1418d4
|
test(test_router_init.py): fix test router init
|
2023-12-30 16:51:39 +05:30 |
|
Krrish Dholakia
|
3cb7acceaa
|
test(test_least_busy_routing.py): fix test
|
2023-12-30 16:12:52 +05:30 |
|
Krrish Dholakia
|
3935f99083
|
test(test_router.py): add retries
|
2023-12-30 15:54:46 +05:30 |
|
Krrish Dholakia
|
69935db239
|
fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue
|
2023-12-30 15:48:34 +05:30 |
|
Krrish Dholakia
|
b66cf0aa43
|
fix(lowest_tpm_rpm_routing.py): broaden scope of get deployment logic
|
2023-12-30 13:27:50 +05:30 |
|
Krrish Dholakia
|
a6719caebd
|
fix(aimage_generation): fix response type
|
2023-12-30 12:53:24 +05:30 |
|
Krrish Dholakia
|
750432457b
|
fix(openai.py): fix async image gen call
|
2023-12-30 12:44:54 +05:30 |
|
Krrish Dholakia
|
2acd086596
|
test(test_least_busy_routing.py): fix test init
|
2023-12-30 12:39:13 +05:30 |
|
ishaan-jaff
|
535a547b66
|
(fix) use cloudflare optional params
|
2023-12-30 12:22:31 +05:30 |
|