Ishaan Jaff | 9cb5a2bec0 | Merge pull request #1290 from fcakyon/patch-1 | 2024-01-01 17:58:17 +05:30
    fix typos & add missing names for azure models
Krrish Dholakia | e1e3721917 | build(user.py): fix page param read issue | 2024-01-01 17:25:52 +05:30
Krrish Dholakia | a41e56a730 | fix(proxy_server.py): enabling user auth via ui | 2024-01-01 17:14:24 +05:30
    https://github.com/BerriAI/litellm/issues/1231
fatih | 6566ebd815 | update azure turbo namings | 2024-01-01 13:03:08 +03:00
Krrish Dholakia | ca40a88987 | fix(proxy_server.py): check if user email in user db | 2024-01-01 14:19:59 +05:30
ishaan-jaff | 7623c1a846 | (feat) proxy - only use print_verbose | 2024-01-01 13:52:11 +05:30
ishaan-jaff | 84cfa1c42a | (test) ci/cd | 2024-01-01 13:51:27 +05:30
Krrish Dholakia | 24e7dc359d | feat(proxy_server.py): introduces new /user/auth endpoint for handling user email auth | 2024-01-01 13:44:47 +05:30
    decouples streamlit ui from proxy server. this then requires the proxy to handle user auth separately.
ishaan-jaff | 52db2a6040 | (feat) proxy - remove streamlit ui on startup | 2024-01-01 12:54:23 +05:30
ishaan-jaff | c8f8bd9e57 | (test) proxy - log metadata to langfuse | 2024-01-01 11:54:16 +05:30
ishaan-jaff | 694956b44e | (test) proxy - pass metadata to openai client | 2024-01-01 11:12:57 +05:30
ishaan-jaff | dacd86030b | (fix) proxy - remove extra print statement | 2024-01-01 10:52:09 +05:30
ishaan-jaff | 16fb83e007 | (fix) proxy - remove errant print statement | 2024-01-01 10:48:12 +05:30
ishaan-jaff | 84fbc903aa | (test) langfuse - set custom trace_id | 2023-12-30 20:19:22 +05:30
ishaan-jaff | 8ae4554a8a | (feat) langfuse - set custom trace_id, trace_user_id | 2023-12-30 20:19:03 +05:30
ishaan-jaff | cc7b964433 | (docs) add litellm.cache docstring | 2023-12-30 20:04:08 +05:30
ishaan-jaff | 70cdc16d6f | (feat) cache context manager - update cache | 2023-12-30 19:50:53 +05:30
ishaan-jaff | e35f17ca3c | (test) caching - context managers | 2023-12-30 19:33:47 +05:30
ishaan-jaff | ddddfe6602 | (feat) add cache context manager | 2023-12-30 19:32:51 +05:30
Krrish Dholakia | 8ff3bbcfee | fix(proxy_server.py): router model group alias routing | 2023-12-30 17:55:24 +05:30
    check model alias group routing before specific deployment routing, to deal with an alias being the same as a deployment name (e.g. gpt-3.5-turbo)
Krrish Dholakia | 027218c3f0 | test(test_lowest_latency_routing.py): add more tests | 2023-12-30 17:41:42 +05:30
Krrish Dholakia | f2d0d5584a | fix(router.py): fix latency based routing | 2023-12-30 17:25:40 +05:30
Krrish Dholakia | c41b1418d4 | test(test_router_init.py): fix test router init | 2023-12-30 16:51:39 +05:30
Krrish Dholakia | 3cb7acceaa | test(test_least_busy_routing.py): fix test | 2023-12-30 16:12:52 +05:30
Krrish Dholakia | 3935f99083 | test(test_router.py): add retries | 2023-12-30 15:54:46 +05:30
Krrish Dholakia | 69935db239 | fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue | 2023-12-30 15:48:34 +05:30
Krrish Dholakia | b66cf0aa43 | fix(lowest_tpm_rpm_routing.py): broaden scope of get deployment logic | 2023-12-30 13:27:50 +05:30
Krrish Dholakia | a6719caebd | fix(aimage_generation): fix response type | 2023-12-30 12:53:24 +05:30
Krrish Dholakia | 750432457b | fix(openai.py): fix async image gen call | 2023-12-30 12:44:54 +05:30
Krrish Dholakia | 2acd086596 | test(test_least_busy_routing.py): fix test init | 2023-12-30 12:39:13 +05:30
ishaan-jaff | 535a547b66 | (fix) use cloudflare optional params | 2023-12-30 12:22:31 +05:30
Krrish Dholakia | c33c1d85bb | fix: support dynamic timeouts for openai and azure | 2023-12-30 12:14:02 +05:30
Krrish Dholakia | 77be3e3114 | fix(main.py): don't set timeout as an optional api param | 2023-12-30 11:47:07 +05:30
ishaan-jaff | aee38d9329 | (fix) batch_completions - set default timeout | 2023-12-30 11:35:55 +05:30
Krrish Dholakia | 38f55249e1 | fix(router.py): support retry and fallbacks for atext_completion | 2023-12-30 11:19:32 +05:30
ishaan-jaff | 5d6954895f | (fix) timeout optional param | 2023-12-30 11:07:52 +05:30
ishaan-jaff | 523415cb0c | (test) dynamic timeout on router | 2023-12-30 10:56:07 +05:30
ishaan-jaff | 2f4cd3b569 | (feat) proxy - support dynamic timeout per request | 2023-12-30 10:55:42 +05:30
ishaan-jaff | 459ba5b45e | (feat) router, add ModelResponse type hints | 2023-12-30 10:44:13 +05:30
Krrish Dholakia | a34de56289 | fix(router.py): handle initial scenario for tpm/rpm routing | 2023-12-30 07:28:45 +05:30
Marmik Pandya | 1426594d3f | add support for mistral json mode via anyscale | 2023-12-29 22:26:22 +05:30
Krrish Dholakia | 2fc264ca04 | fix(router.py): fix int logic | 2023-12-29 20:41:56 +05:30
Krrish Dholakia | cf91e49c87 | refactor(lowest_tpm_rpm.py): move tpm/rpm based routing to a separate file for better testing | 2023-12-29 18:33:43 +05:30
Krrish Dholakia | 54d7bc2cc3 | test(test_least_busy_router.py): add better testing for least busy routing | 2023-12-29 17:16:00 +05:30
Krrish Dholakia | 678bbfa9be | fix(least_busy.py): support consistent use of model id instead of deployment name | 2023-12-29 17:05:26 +05:30
ishaan-jaff | 06e4b301b4 | (test) gemini-pro-vision cost tracking | 2023-12-29 16:31:28 +05:30
ishaan-jaff | 739d9e7a78 | (fix) vertex ai - use usage from response | 2023-12-29 16:30:25 +05:30
ishaan-jaff | e6a7212d10 | (fix) counting streaming prompt tokens - azure | 2023-12-29 16:13:52 +05:30
ishaan-jaff | 8c03be59a8 | (fix) token_counter for tool calling | 2023-12-29 15:54:03 +05:30
ishaan-jaff | 73f60b7315 | (test) stream chunk builder - azure prompt tokens | 2023-12-29 15:45:41 +05:30