Commit graph

3189 commits

Author SHA1 Message Date
ishaan-jaff
8ae4554a8a (feat) langfuse - set custom trace_id, trace_user_id 2023-12-30 20:19:03 +05:30
ishaan-jaff
cc7b964433 (docs) add litellm.cache docstring 2023-12-30 20:04:08 +05:30
ishaan-jaff
70cdc16d6f (feat) cache context manager - update cache 2023-12-30 19:50:53 +05:30
ishaan-jaff
e35f17ca3c (test) caching - context managers 2023-12-30 19:33:47 +05:30
ishaan-jaff
ddddfe6602 (feat) add cache context manager 2023-12-30 19:32:51 +05:30
Krrish Dholakia
8ff3bbcfee fix(proxy_server.py): router model group alias routing
check model alias group routing before specific deployment routing, to deal with an alias being the same as a deployment name (e.g. gpt-3.5-turbo)

2023-12-30 17:55:24 +05:30
Krrish Dholakia
027218c3f0 test(test_lowest_latency_routing.py): add more tests 2023-12-30 17:41:42 +05:30
Krrish Dholakia
f2d0d5584a fix(router.py): fix latency based routing 2023-12-30 17:25:40 +05:30
Krrish Dholakia
c41b1418d4 test(test_router_init.py): fix test router init 2023-12-30 16:51:39 +05:30
Krrish Dholakia
3cb7acceaa test(test_least_busy_routing.py): fix test 2023-12-30 16:12:52 +05:30
Krrish Dholakia
3935f99083 test(test_router.py): add retries 2023-12-30 15:54:46 +05:30
Krrish Dholakia
69935db239 fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue 2023-12-30 15:48:34 +05:30
Krrish Dholakia
b66cf0aa43 fix(lowest_tpm_rpm_routing.py): broaden scope of get deployment logic 2023-12-30 13:27:50 +05:30
Krrish Dholakia
a6719caebd fix(aimage_generation): fix response type 2023-12-30 12:53:24 +05:30
Krrish Dholakia
750432457b fix(openai.py): fix async image gen call 2023-12-30 12:44:54 +05:30
Krrish Dholakia
2acd086596 test(test_least_busy_routing.py): fix test init 2023-12-30 12:39:13 +05:30
ishaan-jaff
535a547b66 (fix) use cloudflare optional params 2023-12-30 12:22:31 +05:30
Krrish Dholakia
c33c1d85bb fix: support dynamic timeouts for openai and azure 2023-12-30 12:14:02 +05:30
Krrish Dholakia
77be3e3114 fix(main.py): don't set timeout as an optional api param 2023-12-30 11:47:07 +05:30
ishaan-jaff
aee38d9329 (fix) batch_completions - set default timeout 2023-12-30 11:35:55 +05:30
Krrish Dholakia
38f55249e1 fix(router.py): support retry and fallbacks for atext_completion 2023-12-30 11:19:32 +05:30
ishaan-jaff
5d6954895f (fix) timeout optional param 2023-12-30 11:07:52 +05:30
ishaan-jaff
523415cb0c (test) dynamic timeout on router 2023-12-30 10:56:07 +05:30
ishaan-jaff
2f4cd3b569 (feat) proxy - support dynamic timeout per request 2023-12-30 10:55:42 +05:30
ishaan-jaff
459ba5b45e (feat) router, add ModelResponse type hints 2023-12-30 10:44:13 +05:30
Krrish Dholakia
a34de56289 fix(router.py): handle initial scenario for tpm/rpm routing 2023-12-30 07:28:45 +05:30
Marmik Pandya
1426594d3f add support for mistral json mode via anyscale 2023-12-29 22:26:22 +05:30
Krrish Dholakia
2fc264ca04 fix(router.py): fix int logic 2023-12-29 20:41:56 +05:30
Krrish Dholakia
cf91e49c87 refactor(lowest_tpm_rpm.py): move tpm/rpm based routing to a separate file for better testing 2023-12-29 18:33:43 +05:30
Krrish Dholakia
54d7bc2cc3 test(test_least_busy_router.py): add better testing for least busy routing 2023-12-29 17:16:00 +05:30
Krrish Dholakia
678bbfa9be fix(least_busy.py): support consistent use of model id instead of deployment name 2023-12-29 17:05:26 +05:30
ishaan-jaff
06e4b301b4 (test) gemini-pro-vision cost tracking 2023-12-29 16:31:28 +05:30
ishaan-jaff
739d9e7a78 (fix) vertex ai - use usage from response 2023-12-29 16:30:25 +05:30
ishaan-jaff
e6a7212d10 (fix) counting streaming prompt tokens - azure 2023-12-29 16:13:52 +05:30
ishaan-jaff
8c03be59a8 (fix) token_counter for tool calling 2023-12-29 15:54:03 +05:30
ishaan-jaff
73f60b7315 (test) stream chunk builder - azure prompt tokens 2023-12-29 15:45:41 +05:30
ishaan-jaff
b1077ebc38 (test) test_token_counter_azure 2023-12-29 15:37:46 +05:30
ishaan-jaff
037dcbbe10 (fix) use openai token counter for azure llms 2023-12-29 15:37:46 +05:30
Krrish Dholakia
cbdfae1267 fix(router.py): support wait_for for async completion calls 2023-12-29 15:27:20 +05:30
Krrish Dholakia
4882325c35 feat(router.py): support 'retry_after' param, to set min timeout before retrying a failed request (default 0) 2023-12-29 15:18:28 +05:30
ishaan-jaff
4a028d012a (test) token_counter - prompt tokens == tokens from API 2023-12-29 15:15:39 +05:30
ishaan-jaff
a300ab9152 (feat) azure stream - count correct prompt tokens 2023-12-29 15:15:39 +05:30
Krrish Dholakia
1e07f0fce8 fix(caching.py): hash the cache key to prevent key too long errors 2023-12-29 15:03:33 +05:30
Krrish Dholakia
6e68cd1125 docs(load_test.md): add litellm load test script to docs 2023-12-29 13:41:44 +05:30
ishaan-jaff
3973b9c8e4 (feat) cloudflare - add exception mapping 2023-12-29 12:31:10 +05:30
ishaan-jaff
243ad31e90 (test) async + stream cloudflare 2023-12-29 12:03:29 +05:30
ishaan-jaff
ee682be093 (feat) add cloudflare streaming 2023-12-29 12:01:26 +05:30
ishaan-jaff
a999e80b46 (test) async cloudflare 2023-12-29 11:50:09 +05:30
ishaan-jaff
dde6bc4fb6 (feat) cloudflare - add optional params 2023-12-29 11:50:09 +05:30
Krrish Dholakia
e06840b571 refactor: move async text completion testing to test_text_completion.py 2023-12-29 11:46:40 +05:30