Author | Commit | Message | Date
ishaan-jaff | 2ce4153ace | (test) test_upperbound_key_params | 2024-02-06 13:17:57 -08:00
ishaan-jaff | 0871327ff0 | (feat) upperbound_key_generate_params | 2024-02-06 13:17:57 -08:00
ishaan-jaff | d09aa560f3 | (docs) upperbound_key_generate_params | 2024-02-06 13:17:57 -08:00
Krrish Dholakia | f8380c638f | fix(langfuse.py): support logging failed llm api calls to langfuse | 2024-02-06 13:17:57 -08:00
Krrish Dholakia | 67dce555ec | fix(utils.py): round max tokens to be int always | 2024-02-06 13:17:26 -08:00
ishaan-jaff | 38445ca508 | (ci/cd) run again | 2024-02-06 13:17:26 -08:00
ishaan-jaff | 7c9ada1b19 | (ci/cd) run again | 2024-02-06 13:17:26 -08:00
ishaan-jaff | 47e056d2ea | (ci/cd) fix test_config_no_auth | 2024-02-06 13:17:26 -08:00
ishaan-jaff | adb67da026 | (fix) test_normal_router_tpm_limit | 2024-02-06 13:17:26 -08:00
ishaan-jaff | bdb9fc8f20 | (fix) parallel_request_limiter debug | 2024-02-06 13:17:26 -08:00
ishaan-jaff | 697109b7ec | (ci/cd) run again | 2024-02-06 13:17:26 -08:00
ishaan-jaff | 61e5f2a79b | (fix) proxy_startup test | 2024-02-06 13:17:26 -08:00
ishaan-jaff | d94d925c3b | (fix) rename proxy startup test | 2024-02-06 13:17:26 -08:00
ishaan-jaff | ac31fe0081 | (ci/cd) run in verbose mode | 2024-02-06 13:17:26 -08:00
Krrish Dholakia | 7055793609 | fix(ollama.py): support format for ollama | 2024-02-06 13:17:26 -08:00
Krrish Dholakia | b7f1bd696e | fix(utils.py): round max tokens to be int always | 2024-02-06 13:15:51 -08:00
ishaan-jaff | eb779c61c6 | (ci/cd) run again | 2024-02-06 13:15:51 -08:00
ishaan-jaff | 2a7b07ffca | (ci/cd) run again | 2024-02-06 13:15:51 -08:00
ishaan-jaff | 79f444edda | (ci/cd) fix test_config_no_auth | 2024-02-06 13:15:51 -08:00
ishaan-jaff | 9355fc62a7 | (fix) test_normal_router_tpm_limit | 2024-02-06 13:15:51 -08:00
ishaan-jaff | b4372457c4 | (fix) parallel_request_limiter debug | 2024-02-06 13:15:51 -08:00
ishaan-jaff | 84942bb694 | (ci/cd) run again | 2024-02-06 13:15:51 -08:00
ishaan-jaff | 0719c32f9e | (fix) proxy_startup test | 2024-02-06 13:15:51 -08:00
ishaan-jaff | acf2474d68 | (fix) rename proxy startup test | 2024-02-06 13:15:51 -08:00
ishaan-jaff | 9ed1d033fe | (ci/cd) run in verbose mode | 2024-02-06 13:15:51 -08:00
Ishaan Jaff | 8119f547ef | Merge pull request #1857 from BerriAI/litellm_improve_logging_langfuse_cache_hits ([FEAT] show langfuse logging / cache tags better through proxy) | 2024-02-06 13:15:09 -08:00
Krrish Dholakia | 22913945b0 | fix(utils.py): round max tokens to be int always | 2024-02-06 13:10:52 -08:00
ishaan-jaff | 3da30383f5 | (feat) show langfuse logging tags better through proxy | 2024-02-06 13:09:48 -08:00
ishaan-jaff | b28de5e329 | (ci/cd) run again | 2024-02-06 13:02:36 -08:00
Krrish Dholakia | e5fec98e1f | fix(proxy_server.py): do a health check on db before returning if proxy ready (if db connected) | 2024-02-06 12:57:05 -08:00
ishaan-jaff | 8c0f912780 | (ci/cd) run again | 2024-02-06 12:53:47 -08:00
ishaan-jaff | a1c34ac9ec | (ci/cd) fix test_config_no_auth | 2024-02-06 12:47:19 -08:00
ishaan-jaff | 1489fd369b | (fix) test_normal_router_tpm_limit | 2024-02-06 12:44:30 -08:00
ishaan-jaff | 13fe72d6d5 | (fix) parallel_request_limiter debug | 2024-02-06 12:43:28 -08:00
ishaan-jaff | 506c14b896 | (ci/cd) run again | 2024-02-06 12:22:24 -08:00
ishaan-jaff | 29303e979e | (fix) proxy_startup test | 2024-02-06 11:38:57 -08:00
ishaan-jaff | 4099340ecb | (fix) rename proxy startup test | 2024-02-06 11:27:24 -08:00
Ishaan Jaff | 7cb69c72c8 | Merge branch 'main' into litellm_add_semantic_cache | 2024-02-06 11:18:43 -08:00
ishaan-jaff | 8175fb4deb | (fix) mark semantic caching as beta test | 2024-02-06 11:04:19 -08:00
ishaan-jaff | 405a44727c | (ci/cd) run in verbose mode | 2024-02-06 10:57:20 -08:00
ishaan-jaff | 1afdf5cf36 | (fix) semantic caching | 2024-02-06 10:55:15 -08:00
ishaan-jaff | c8a83bb745 | (fix) test-semantic caching | 2024-02-06 10:39:44 -08:00
ishaan-jaff | 2732c47b70 | (feat) redis-semantic cache on proxy | 2024-02-06 10:35:21 -08:00
ishaan-jaff | a1fc1e49c7 | (fix) use semantic cache on proxy | 2024-02-06 10:27:33 -08:00
ishaan-jaff | 05f379234d | allow setting redis_semantic cache_embedding model | 2024-02-06 10:22:02 -08:00
Krrish Dholakia | d1db67890c | fix(ollama.py): support format for ollama | 2024-02-06 10:11:52 -08:00
ishaan-jaff | 751fb1af89 | (feat) log semantic_sim to langfuse | 2024-02-06 09:31:57 -08:00
Krrish Dholakia | 3afa5230d6 | fix(utils.py): return finish reason for last vertex ai chunk | 2024-02-06 09:21:03 -08:00
ishaan-jaff | 70a895329e | (feat) working semantic cache on proxy | 2024-02-06 08:55:25 -08:00
ishaan-jaff | a3b1e3bc84 | (feat) redis-semantic cache | 2024-02-06 08:54:36 -08:00