Commit graph

33 commits

Author SHA1 Message Date
Krrish Dholakia
2874b94fb1 refactor: replace .error() with .exception() logging for better debugging on sentry 2024-08-16 09:22:47 -07:00
Krrish Dholakia
e391e30285 refactor: replace 'traceback.print_exc()' with logging library
allows error logs to be in json format for otel logging
2024-06-06 13:47:43 -07:00
Krrish Dholakia
bcc07afd04 fix(lowest_latency.py): set default none value for time_to_first_token in sync log success event 2024-05-21 18:42:15 -07:00
Krrish Dholakia
f007bf7e21 feat(lowest_latency.py): route by time to first token, for streaming requests (if available)
Closes https://github.com/BerriAI/litellm/issues/3574
2024-05-21 13:08:17 -07:00
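The commit above routes streaming requests by time to first token (TTFT) when it has been recorded, falling back to total completion latency otherwise. A minimal sketch of that selection rule — the deployment dicts and the `ttft`/`latency` keys are illustrative assumptions, not litellm's actual schema:

```python
def pick_deployment(deployments):
    """Prefer time-to-first-token when available (streaming requests),
    otherwise fall back to total completion latency."""
    def score(d):
        # Use ttft if it was recorded for this deployment; else total latency.
        ttft = d.get("ttft")
        return ttft if ttft is not None else d["latency"]
    return min(deployments, key=score)

deployments = [
    {"id": "deployment-a", "latency": 2.5, "ttft": 0.9},
    {"id": "deployment-b", "latency": 1.8, "ttft": None},  # no streaming stats yet
]
print(pick_deployment(deployments)["id"])  # deployment-a (0.9 < 1.8)
```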
Krrish Dholakia
84db63e3dd fix(lowest_latency.py): allow ttl to be a float 2024-05-15 09:59:21 -07:00
Rahul Kataria
be4450106d Remove duplicate code in router_strategy 2024-05-12 18:05:57 +05:30
Krrish Dholakia
926b86af87 feat(bedrock_httpx.py): moves to using httpx client for bedrock cohere calls 2024-05-11 13:43:08 -07:00
Krrish Dholakia
5f93cae3ff feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
Krrish Dholakia
cb88ed4df8 fix(lowest_latency.py): fix the size of the latency list to 10 by default (can be modified) 2024-05-03 09:00:32 -07:00
Krrish Dholakia
7ae28bfcc9 fix(lowest_latency.py): allow setting a buffer for getting values within a certain latency threshold
if an endpoint is slow - its completion time might not be updated until the call is completed. This prevents us from overloading those endpoints in a simple way.
2024-04-30 12:00:26 -07:00
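The buffer described in the commit above can be sketched as choosing at random among all deployments whose recorded latency is within a fractional threshold of the fastest one, so a momentarily stale slow endpoint does not absorb every request. The function name and buffer semantics here are assumptions, not litellm's exact implementation:

```python
import random

def pick_within_buffer(latencies, buffer=0.1):
    """Pick a deployment at random from those whose latency is
    within `buffer` (fractional) of the lowest observed latency."""
    lowest = min(latencies.values())
    candidates = [d for d, lat in latencies.items() if lat <= lowest * (1 + buffer)]
    return random.choice(candidates)

latencies = {"a": 1.00, "b": 1.05, "c": 3.00}
# "a" and "b" are within 10% of the fastest; "c" is excluded.
print(pick_within_buffer(latencies) in {"a", "b"})  # True
```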
Ishaan Jaff
d4a0530d02 fix - lowest latency routing 2024-04-29 16:02:57 -07:00
Ishaan Jaff
2a49580b5b fix lowest latency - routing 2024-04-29 15:51:52 -07:00
Ishaan Jaff
7306072d33 fix debugging lowest latency router 2024-04-25 19:34:28 -07:00
Ishaan Jaff
3ab5e687f6 fix better debugging for latency 2024-04-25 11:35:08 -07:00
Ishaan Jaff
4931514330 fix 2024-04-25 11:25:03 -07:00
Ishaan Jaff
3b9d6dfc47 temp - show better debug logs for lowest latency 2024-04-25 11:22:52 -07:00
Ishaan Jaff
a26ecbad97 fix - increase default penalty for lowest latency 2024-04-25 07:54:25 -07:00
Ishaan Jaff
5dae1cf303 fix - set latency stats in kwargs 2024-04-24 20:13:45 -07:00
Ishaan Jaff
654c736d29 feat - penalize timeout errors 2024-04-24 16:35:00 -07:00
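Penalizing timeout errors, as in the commit above, can be sketched by recording an inflated latency sample whenever a call times out, which pushes that deployment down the ranking on subsequent picks. The penalty value and data shapes are illustrative assumptions:

```python
TIMEOUT_PENALTY = 5.0  # seconds recorded per timeout; an assumed default

def record_result(samples, deployment, latency, timed_out=False):
    """Append a latency sample; substitute a fixed penalty on timeout."""
    samples.setdefault(deployment, []).append(
        TIMEOUT_PENALTY if timed_out else latency
    )

samples = {}
record_result(samples, "a", 0.8)
record_result(samples, "a", 0.9, timed_out=True)  # recorded as 5.0, not 0.9
avg = sum(samples["a"]) / len(samples["a"])
print(avg)  # 2.9 — the timeout drags the average up
```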
Krish Dholakia
b8d285d120 Merge pull request #2798 from CLARKBENHAM/main
add test for rate limits - Router isn't coroutine safe
2024-04-06 08:47:40 -07:00
Krrish Dholakia
48a5948081 fix(router.py): handle id being passed in as int 2024-04-04 14:23:10 -07:00
CLARKBENHAM
1c93ebf05a undo black formating 2024-04-02 19:53:48 -07:00
CLARKBENHAM
2dd0c32612 fix lowest latency tests 2024-04-02 19:10:40 -07:00
Krrish Dholakia
afaee375e6 fix(lowest_latency.py): consistent time calc 2024-02-14 15:03:35 -08:00
stephenleo
a6f24acb8b fix latency calc (lower better) 2024-02-11 17:06:46 +08:00
Krrish Dholakia
ae9b8f50e0 fix(lowest_latency.py): fix merge issue 2024-01-10 21:37:46 +05:30
Krish Dholakia
e635ca2151 Merge branch 'main' into litellm_latency_routing_updates 2024-01-10 21:33:54 +05:30
Krrish Dholakia
7df19b2f7c fix(router.py): allow user to control the latency routing time window 2024-01-10 20:56:52 +05:30
Krrish Dholakia
f288b12411 fix(lowest_latency.py): add back tpm/rpm checks, configurable time window 2024-01-10 20:52:01 +05:30
Krrish Dholakia
b5ec5eb10b refactor(lowest_latency.py): fix linting error 2024-01-09 09:51:43 +05:30
Krrish Dholakia
fb9ebfbedd feat(lowest_latency.py): support expanded time window for latency based routing
uses a 1hr avg. of latency for deployments, to determine which to route to

https://github.com/BerriAI/litellm/issues/1361
2024-01-09 09:38:04 +05:30
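The expanded time window above averages only the latency samples recorded inside a rolling window (one hour, per the commit body). A sketch with hypothetical names — litellm's actual bookkeeping may differ:

```python
import time

WINDOW_SECONDS = 3600  # 1 hour, per the commit message

def windowed_avg(samples, now=None, window=WINDOW_SECONDS):
    """Average (timestamp, latency) samples no older than `window` seconds."""
    now = time.time() if now is None else now
    recent = [lat for ts, lat in samples if now - ts <= window]
    return sum(recent) / len(recent) if recent else None

now = 10_000.0
samples = [(now - 7200, 9.0), (now - 100, 1.0), (now - 50, 2.0)]
print(windowed_avg(samples, now=now))  # 1.5 — the 2h-old sample is dropped
```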
Krrish Dholakia
d3dee9b20c test(test_lowest_latency_routing.py): add more tests 2023-12-30 17:41:42 +05:30
Krrish Dholakia
25ee96271e fix(router.py): fix latency based routing 2023-12-30 17:25:40 +05:30