Author | Commit | Message | Date
ishaan-jaff | 96e8c2b4cf | (test) tg - ai - set max_tokens=5, fast test | 2024-01-08 14:03:31 +05:30
ishaan-jaff | bf30e8fdb2 | (test) router- verbose logs with fallbacks | 2024-01-08 14:00:12 +05:30
ishaan-jaff | c5589e71e7 | (docs) proxy, show users how to use detailed_debug | 2024-01-08 12:58:23 +05:30
ishaan-jaff | 6786e4f343 | (feat) allow users to opt into detailed debug on proxy | 2024-01-08 12:53:41 +05:30
ishaan-jaff | b4d9754dc2 | (feat) verbose logs + fallbacks - working well | 2024-01-08 12:33:09 +05:30
ishaan-jaff | f63f9d02cc | (feat) use '-debug' with proxy logger | 2024-01-08 10:35:49 +05:30
ishaan-jaff | 7e4f5e5fbd | (feat) log what model is being used as a fallback | 2024-01-08 09:41:24 +05:30
ishaan-jaff | f9d75233de | (feat) move litellm router - to use logging.debug, logging.info | 2024-01-08 09:31:29 +05:30
ishaan-jaff | 119ff2fe05 | (docs) show fallbacks on proxy_config | 2024-01-08 08:54:10 +05:30
ishaan-jaff | ccd100fab3 | (fix) improve logging when no fallbacks found | 2024-01-08 08:53:40 +05:30
ishaan-jaff | 7742950c57 | v0 proxy logger | 2024-01-08 08:25:04 +05:30
ishaan-jaff | b50b44f431 | (fix) dockerfile | 2024-01-08 08:08:51 +05:30
ishaan-jaff | e7c5a9e014 | (fix) dockerfile merge conflicts | 2024-01-08 08:02:50 +05:30
Krrish Dholakia | c04fa54d19 | fix(utils.py): fix exception raised | 2024-01-08 07:42:17 +05:30
Krrish Dholakia | 3469b5b911 | fix(utils.py): map optional params for gemini | 2024-01-08 07:38:55 +05:30
Krrish Dholakia | 79264b0dab | fix(gemini.py): better error handling | 2024-01-08 07:32:26 +05:30
Krrish Dholakia | 75177c2a15 | bump: version 1.16.16 → 1.16.17 | 2024-01-08 07:16:37 +05:30
Krrish Dholakia | 888e21e8e7 | test(test_google_ai_studio_gemini.py): use an image url that will work on ci/cd | 2024-01-06 22:58:37 +05:30
Krrish Dholakia | 1507217725 | fix(factory.py): more logging around the image loading for gemini | 2024-01-06 22:50:44 +05:30
Krish Dholakia | 439ee3bafc | Merge pull request #1344 from BerriAI/litellm_speed_improvements (Litellm speed improvements) | 2024-01-06 22:38:10 +05:30
Krrish Dholakia | 0089a69aaf | bump: version 1.16.15 → 1.16.16 | 2024-01-06 22:36:33 +05:30
Krrish Dholakia | 5fd2f945f3 | fix(factory.py): support gemini-pro-vision on google ai studio (https://github.com/BerriAI/litellm/issues/1329) | 2024-01-06 22:36:22 +05:30
Krrish Dholakia | 3577857ed1 | fix(sagemaker.py): fix the post-call logging logic | 2024-01-06 21:52:58 +05:30
Krrish Dholakia | f2ad13af65 | fix(openai.py): fix image generation model dump | 2024-01-06 17:55:32 +05:30
Krrish Dholakia | 2d8d7e3569 | perf(router.py): don't use asyncio.wait for - just pass it to the completion call for timeouts | 2024-01-06 17:05:55 +05:30
Krrish Dholakia | 712f89b4f1 | fix(utils.py): handle original_response being a json | 2024-01-06 17:02:50 +05:30
Krrish Dholakia | a7245dba07 | build(Dockerfile): fixes the build time setup | 2024-01-06 16:41:37 +05:30
ishaan-jaff | 0f7c37355a | bump: version 1.16.14 → 1.16.15 | 2024-01-06 16:33:51 +05:30
ishaan-jaff | edac4130bb | (fix) s3 + os.environ/ cache test | 2024-01-06 16:33:29 +05:30
ishaan-jaff | c222c0bfb8 | (fix) proxy + cache - os.environ/ vars | 2024-01-06 16:15:53 +05:30
ishaan-jaff | 174248fc71 | (test) add back test for counting stream completion tokens | 2024-01-06 16:08:32 +05:30
Krish Dholakia | 8d32f08858 | Merge pull request #1342 from BerriAI/litellm_dockerfile_updates (build(Dockerfile): moves prisma logic to dockerfile) | 2024-01-06 16:03:25 +05:30
Krrish Dholakia | 4e3750b017 | build(Dockerfile): keep exposed port consistent | 2024-01-06 16:01:59 +05:30
ishaan-jaff | f999b63d05 | (test) using os.environ/ on cache + proxy | 2024-01-06 15:54:50 +05:30
ishaan-jaff | c2b061acb2 | (feat) cache+proxy - set os.environ/ on proxy config | 2024-01-06 15:54:16 +05:30
Krrish Dholakia | 9a4a96f46e | perf(azure+openai-files): use model_dump instead of json.loads + model_dump_json | 2024-01-06 15:50:05 +05:30
ishaan-jaff | 7611081d55 | (docs) caching + proxy | 2024-01-06 15:46:53 +05:30
ishaan-jaff | 0d152b3748 | (fix) cloudflare tests | 2024-01-06 15:35:49 +05:30
ishaan-jaff | 9002c06cd5 | (fix) ci/cd use v prefix for container releases | 2024-01-06 15:27:53 +05:30
Krrish Dholakia | 13e8535b14 | test(test_async_fn.py): skip cloudflare test - flaky | 2024-01-06 15:21:10 +05:30
Krrish Dholakia | 523d8e5977 | build(Dockerfile): moves prisma logic to dockerfile | 2024-01-06 15:21:10 +05:30
Krrish Dholakia | 9375570547 | test(test_async_fn.py): skip cloudflare test - flaky | 2024-01-06 15:17:42 +05:30
Krrish Dholakia | 7434f1a300 | build(Dockerfile): moves prisma logic to dockerfile | 2024-01-06 14:59:10 +05:30
Krrish Dholakia | 2fb922a469 | ci(config.yml): have circle ci run on non-main branches (supports litellm_*) | 2024-01-06 14:50:07 +05:30
ishaan-jaff | 6011c5c8c2 | (fix) undo changes that were trying to control prisma connections | 2024-01-06 14:32:40 +05:30
Krrish Dholakia | 04c04d62e3 | test(test_stream_chunk_builder.py): remove completion assert, the test is for prompt tokens | 2024-01-06 14:12:44 +05:30
Krrish Dholakia | 5c45e69a5e | test(test_proxy_server_keys.py): add logic for connecting/disconnecting from http server | 2024-01-06 14:09:10 +05:30
Krrish Dholakia | b51d98c6e3 | docs: fix pip install litellm[proxy] instruction | 2024-01-06 13:49:15 +05:30
Krrish Dholakia | bf56179da8 | fix(proxy/utils.py): increase http connection pool for prisma | 2024-01-06 13:45:30 +05:30
ishaan-jaff | 4a076350cc | (ci/cd) move to old version of test_proxy_server_keys.py | 2024-01-06 13:03:12 +05:30