Author | Commit | Message | Date
Krrish Dholakia | 4ff4180a53 | build(Dockerfile.database): fixing build issues | 2024-01-08 19:31:19 +05:30
Krish Dholakia | 6b3cf217a4 | Update ghcr_deploy.yml | 2024-01-08 18:16:46 +05:30
Krrish Dholakia | 2bcfe28ee9 | fix(proxy_server.py): improve /health/readiness endpoint to give more details on connected services | 2024-01-08 17:45:00 +05:30
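The /health/readiness endpoint mentioned above reports whether the proxy and its connected services (database, cache, etc.) are up. A minimal sketch of polling it, assuming a local deployment; the host and port below are assumptions, not values taken from the commit:

```python
import requests

# Hedged sketch: poll the proxy's readiness endpoint. Host/port are assumed
# defaults for a local run, not confirmed by the commit itself.
resp = requests.get("http://0.0.0.0:8000/health/readiness", timeout=5)
resp.raise_for_status()
print(resp.json())  # expected to include details on connected services
```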
Krrish Dholakia | 9470c7d9b8 | build(Dockerfile): new dockerfile with prisma db setup (not many services allow you to pass docker build args, so we needed another way of setting this) | 2024-01-08 17:41:19 +05:30
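The reasoning in this commit is that the database connection string must be resolved when the container starts, since many hosting services do not expose Docker build args. A hypothetical sketch of that runtime pattern; the env var name DATABASE_URL follows the usual Prisma convention and is not confirmed by the commit:

```python
import os

# Hypothetical sketch: read the connection string at container start (runtime)
# rather than baking it into the image at build time via a build arg.
database_url = os.environ.get("DATABASE_URL")
if not database_url:
    raise RuntimeError("DATABASE_URL must be set in the container environment")
```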
ishaan-jaff | fa74831d79 | (docs) control proxy debug using env vars | 2024-01-08 16:15:33 +05:30
ishaan-jaff | 5d7646b30a | (fix) proxy - show detailed_debug logs | 2024-01-08 15:34:24 +05:30
Krrish Dholakia | dd78782133 | fix(utils.py): error handling for litellm --model mistral edge case | 2024-01-08 15:09:01 +05:30
Krrish Dholakia | 1ca7747371 | fix(router.py): azure client init fix | 2024-01-08 14:56:57 +05:30
Krrish Dholakia | 1a480b3bd2 | refactor: trigger dockerbuild | 2024-01-08 14:42:28 +05:30
Ishaan Jaff | a70626d6e9 | Merge pull request #1356 from BerriAI/litellm_improve_proxy_logs: [Feat] Improve Proxy Logging | 2024-01-08 14:41:01 +05:30
Krrish Dholakia | ec83243521 | fix(router.py): increasing connection pool limits for azure router | 2024-01-08 14:39:49 +05:30
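Raising connection-pool limits for the Azure router generally means configuring the httpx client used by the Azure OpenAI SDK. A rough sketch of that technique; the credentials, endpoint, and limit values are placeholders, not the values used in the commit:

```python
import httpx
from openai import AsyncAzureOpenAI

# Sketch only: larger pool limits let the router hold more concurrent
# connections to Azure. All values below are illustrative placeholders.
client = AsyncAzureOpenAI(
    api_key="my-azure-key",
    api_version="2023-07-01-preview",
    azure_endpoint="https://my-endpoint.openai.azure.com",
    http_client=httpx.AsyncClient(
        limits=httpx.Limits(max_connections=1000, max_keepalive_connections=100)
    ),
)
```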
ishaan-jaff | 96e8c2b4cf | (test) tg - ai - set max_tokens=5, fast test | 2024-01-08 14:03:31 +05:30
ishaan-jaff | bf30e8fdb2 | (test) router- verbose logs with fallbacks | 2024-01-08 14:00:12 +05:30
ishaan-jaff | c5589e71e7 | (docs) proxy, show users how to use detailed_debug | 2024-01-08 12:58:23 +05:30
ishaan-jaff | 6786e4f343 | (feat) allow users to opt into detailed debug on proxy | 2024-01-08 12:53:41 +05:30
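The commits above add an opt-in switch for verbose proxy logs, referenced in the messages as detailed_debug. For the Python library, the analogous knob at the time was the verbose flag shown below; the proxy CLI usage in the comment is an assumption based on these commit messages, not an exact invocation:

```python
import litellm

# Library-side debug logging. The proxy-side equivalent referenced in these
# commits is an opt-in detailed_debug option (e.g. a `--detailed_debug` CLI
# flag); the exact flag spelling is an assumption here.
litellm.set_verbose = True
```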
Krrish Dholakia | f07daa3780 | bump: version 1.16.18 → 1.16.19 | 2024-01-08 12:43:29 +05:30
Krrish Dholakia | 6333fbfe56 | fix(main.py): support cost calculation for text completion streaming object | 2024-01-08 12:41:43 +05:30
ishaan-jaff | b4d9754dc2 | (feat) verbose logs + fallbacks - working well | 2024-01-08 12:33:09 +05:30
Krish Dholakia | 442ebdde7c | Update ghcr_deploy.yml | 2024-01-08 12:22:30 +05:30
Krrish Dholakia | 9b46412279 | fix(utils.py): fix logging for text completion streaming | 2024-01-08 12:05:28 +05:30
Krish Dholakia | b4d624f332 | Update ghcr_deploy.yml: always update latest tag | 2024-01-08 11:47:12 +05:30
Krrish Dholakia | 3d0ea08f77 | refactor(gemini.py): fix linting issue | 2024-01-08 11:43:33 +05:30
Krrish Dholakia | e70a5a8970 | bump: version 1.16.17 → 1.16.18 | 2024-01-08 11:41:13 +05:30
Krrish Dholakia | b1fd0a164b | fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) | 2024-01-08 11:40:56 +05:30
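The timeout fix above applies to text-completion calls routed to Hugging Face and OpenAI. A brief sketch of passing a per-request timeout through litellm; the model name and timeout value are illustrative only:

```python
import litellm

# Illustrative only: pass a per-request timeout (seconds) through to the
# underlying Hugging Face / OpenAI text-completion request.
response = litellm.text_completion(
    model="huggingface/bigcode/starcoder",
    prompt="def fib(n):",
    timeout=10,
)
print(response.choices[0].text)
```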
Krrish Dholakia | c720870f80 | docs(gemini.md,-deploy.md): doc updates | 2024-01-08 11:02:12 +05:30
Krish Dholakia | 4ea3e778f7 | Merge pull request #1315 from spdustin/feature_allow_claude_prefill: Adds "pre-fill" support for Claude | 2024-01-08 10:48:15 +05:30
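PR #1315 adds "pre-fill" support for Claude: ending the message list with a partial assistant turn so the model continues from that text rather than starting a fresh reply. A hedged sketch of how such a call typically looks through litellm; the model name and message content are placeholders:

```python
import litellm

# Sketch of the Claude "pre-fill" pattern: the trailing assistant message is a
# partial reply the model should continue. Model name below is a placeholder.
response = litellm.completion(
    model="claude-2",
    messages=[
        {"role": "user", "content": "Return the capital of France as JSON."},
        {"role": "assistant", "content": '{"capital": "'},  # pre-filled start
    ],
)
print(response.choices[0].message.content)
```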
Krrish Dholakia | 83306fe0c9 | build(Dockerfile): fix pr merge issues | 2024-01-08 10:38:19 +05:30
ishaan-jaff | f63f9d02cc | (feat) use '-debug' with proxy logger | 2024-01-08 10:35:49 +05:30
Ishaan Jaff | 5cfcd42763 | Merge pull request #1311 from Manouchehri/patch-5: (caching) improve s3 backend | 2024-01-08 09:47:57 +05:30
ishaan-jaff | 7e4f5e5fbd | (feat) log what model is being used as a fallback | 2024-01-08 09:41:24 +05:30
ishaan-jaff | f9d75233de | (feat) move litellm router - to use logging.debug, logging.info | 2024-01-08 09:31:29 +05:30
ishaan-jaff | 119ff2fe05 | (docs) show fallbacks on proxy_config | 2024-01-08 08:54:10 +05:30
ishaan-jaff | ccd100fab3 | (fix) improve logging when no fallbacks found | 2024-01-08 08:53:40 +05:30
ishaan-jaff | 7742950c57 | v0 proxy logger | 2024-01-08 08:25:04 +05:30
ishaan-jaff | b50b44f431 | (fix) dockerfile | 2024-01-08 08:08:51 +05:30
ishaan-jaff | e7c5a9e014 | (fix) dockerfile merge conflicts | 2024-01-08 08:02:50 +05:30
Krrish Dholakia | c04fa54d19 | fix(utils.py): fix exception raised | 2024-01-08 07:42:17 +05:30
Krrish Dholakia | 3469b5b911 | fix(utils.py): map optional params for gemini | 2024-01-08 07:38:55 +05:30
Krrish Dholakia | 79264b0dab | fix(gemini.py): better error handling | 2024-01-08 07:32:26 +05:30
Krrish Dholakia | 75177c2a15 | bump: version 1.16.16 → 1.16.17 | 2024-01-08 07:16:37 +05:30
David Manouchehri | 56b03732ae | (caching) Set Content-Disposition header and Content-Language | 2024-01-07 12:21:15 -05:00
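The S3 caching improvement sets Content-Disposition and Content-Language on cached objects. A rough sketch of what such a write looks like with boto3; the bucket, key, and header values are assumptions for illustration, not the ones used in the commit:

```python
import boto3

# Sketch: write a cached response to S3 with the extra metadata headers the
# commit mentions. Bucket, key, and header values are placeholders.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-litellm-cache",
    Key="cache/abc123.json",
    Body=b'{"response": "..."}',
    ContentType="application/json",
    ContentDisposition='inline; filename="abc123.json"',
    ContentLanguage="en",
)
```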
Krrish Dholakia | 888e21e8e7 | test(test_google_ai_studio_gemini.py): use an image url that will work on ci/cd | 2024-01-06 22:58:37 +05:30
Krrish Dholakia | 1507217725 | fix(factory.py): more logging around the image loading for gemini | 2024-01-06 22:50:44 +05:30
Krish Dholakia | 439ee3bafc | Merge pull request #1344 from BerriAI/litellm_speed_improvements: Litellm speed improvements | 2024-01-06 22:38:10 +05:30
Krrish Dholakia | 0089a69aaf | bump: version 1.16.15 → 1.16.16 | 2024-01-06 22:36:33 +05:30
Krrish Dholakia | 5fd2f945f3 | fix(factory.py): support gemini-pro-vision on google ai studio (https://github.com/BerriAI/litellm/issues/1329) | 2024-01-06 22:36:22 +05:30
Krrish Dholakia | 3577857ed1 | fix(sagemaker.py): fix the post-call logging logic | 2024-01-06 21:52:58 +05:30
Krrish Dholakia | f2ad13af65 | fix(openai.py): fix image generation model dump | 2024-01-06 17:55:32 +05:30
Krrish Dholakia | 2d8d7e3569 | perf(router.py): don't use asyncio.wait_for - just pass it to the completion call for timeouts | 2024-01-06 17:05:55 +05:30
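The perf change above drops the asyncio.wait_for wrapper and hands the timeout straight to the completion call, so the underlying client enforces it. A hedged sketch of the two patterns:

```python
import asyncio  # only needed for the old wait_for pattern shown in comments
import litellm

async def call_model(model: str, messages: list, timeout: float):
    # Old pattern: an extra wrapper task that cancels the call on timeout.
    # return await asyncio.wait_for(
    #     litellm.acompletion(model=model, messages=messages), timeout
    # )

    # Pattern described by the commit: pass the timeout through so the
    # underlying HTTP client enforces it directly.
    return await litellm.acompletion(model=model, messages=messages, timeout=timeout)
```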
Krrish Dholakia | 712f89b4f1 | fix(utils.py): handle original_response being a json | 2024-01-06 17:02:50 +05:30