Krrish Dholakia | a35f4272f4 | refactor(lowest_latency.py): fix linting error | 2024-01-09 09:51:43 +05:30
Krrish Dholakia | 88d498a54a | fix(ollama.py): use tiktoken as backup for prompt token counting | 2024-01-09 09:47:18 +05:30
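The token-counting fix above (88d498a54a) is the pattern of using a model-specific tokenizer when one is available and falling back to a generic tiktoken encoding when it is not. A minimal sketch of that idea, not the actual ollama.py code; the helper name and the cl100k_base encoding choice are assumptions:

```python
import tiktoken


def count_prompt_tokens(prompt: str, tokenizer=None) -> int:
    """Count tokens with a model-specific tokenizer if available,
    otherwise fall back to tiktoken's cl100k_base encoding."""
    if tokenizer is not None:
        try:
            return len(tokenizer.encode(prompt))
        except Exception:
            pass  # tokenizer failed; fall through to the tiktoken backup
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(prompt))


print(count_prompt_tokens("Why is the sky blue?"))
```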
Krrish Dholakia | 11b6c66609 | docs(gemini.md): fix docs | 2024-01-09 09:38:04 +05:30
Krrish Dholakia | a5147f9e06 | feat(lowest_latency.py): support expanded time window for latency based routing | 2024-01-09 09:38:04 +05:30
    uses a 1hr avg. of latency for deployments, to determine which to route to
    https://github.com/BerriAI/litellm/issues/1361
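The expanded time window feature (a5147f9e06) routes to whichever deployment has the lowest average latency over the last hour. A rough sketch of that selection logic under the assumptions in the commit message; this is simplified and not the actual lowest_latency.py implementation:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 3600  # 1hr window, per the commit message

# deployment_id -> list of (timestamp, latency_seconds)
latency_log: dict[str, list[tuple[float, float]]] = defaultdict(list)


def record_latency(deployment_id: str, latency: float) -> None:
    latency_log[deployment_id].append((time.time(), latency))


def pick_lowest_latency(deployment_ids: list[str]) -> str:
    """Pick the deployment with the lowest average latency over the last hour."""
    cutoff = time.time() - WINDOW_SECONDS
    best_id, best_avg = deployment_ids[0], float("inf")
    for dep in deployment_ids:
        recent = [lat for ts, lat in latency_log[dep] if ts >= cutoff]
        if not recent:
            continue  # no recent data for this deployment; skip it
        avg = sum(recent) / len(recent)
        if avg < best_avg:
            best_id, best_avg = dep, avg
    return best_id
```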
Krish Dholakia | 8225eda0cc | Merge pull request #1370 from haseeb-heaven/main | 2024-01-09 09:36:54 +05:30
    Updated Gemini AI Documentation
HeavenHM | fa9d0faf13 | Update gemini.md | 2024-01-09 08:00:51 +05:30
    Added example for Gemini Vision Pro
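The gemini.md change (fa9d0faf13) adds a Gemini Vision Pro example to the docs. A sketch of what such a call looks like through litellm, assuming the OpenAI-style image message format; the model string, API key, and image URL are illustrative and the exact format in the docs example may differ:

```python
import os
from litellm import completion

os.environ["GEMINI_API_KEY"] = "your-api-key"  # illustrative placeholder

response = completion(
    model="gemini/gemini-pro-vision",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/boardwalk.jpg"},  # illustrative
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```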
Mateo Cámara | 3bc8a03150 | Merge remote-tracking branch 'origin/main' | 2024-01-08 18:51:46 +01:00
    # Conflicts:
    # litellm/main.py
Krrish Dholakia | 5b7c3c7187 | refactor(lowest_latency.py): fix linting issue | 2024-01-08 23:07:43 +05:30
Krrish Dholakia | a60e23d98a | feat(lowest_latency.py): support expanded time window for latency based routing | 2024-01-08 22:52:32 +05:30
    uses a 1hr avg. of latency for deployments, to determine which to route to
    https://github.com/BerriAI/litellm/issues/1361
Ishaan Jaff | 8a85b719f7 | Merge pull request #1368 from deepinfra/udpate-models-2 | 2024-01-08 22:46:14 +05:30
    Update deepinfra models
ishaan-jaff | 6263103680 | (ci/cd) run again | 2024-01-08 22:42:31 +05:30
Iskren Chernev | 2486f92523 | Update deepinfra models | 2024-01-08 18:54:15 +02:00
Krrish Dholakia | edc088f038 | build(Dockerfile): pip install from wheels not re-install requirements.txt | 2024-01-08 20:26:09 +05:30
    reduce size of dockerbuild
Krrish Dholakia | 8edd3fe651 | test(test_proxy_startup.py): fix gunicorn test | 2024-01-08 19:55:18 +05:30
Krish Dholakia | 59c57f84cf | Update README.md | 2024-01-08 19:49:43 +05:30
Krrish Dholakia | 55e70aa93a | bump: version 1.16.19 → 1.16.20 | 2024-01-08 19:47:10 +05:30
Krish Dholakia | e949a2ada3 | Merge pull request #1367 from BerriAI/litellm_proxy_startup | 2024-01-08 19:46:48 +05:30
    fix(proxy_server.py): add support for passing in config file via worker_config directly + testing
Krrish Dholakia | 750330509e | build(Dockerfile.database): fix new dockerfile | 2024-01-08 19:37:34 +05:30
Krrish Dholakia | 4ff4180a53 | build(Dockerfile.database): fixing build issues | 2024-01-08 19:31:19 +05:30
Krish Dholakia | 6b3cf217a4 | Update ghcr_deploy.yml | 2024-01-08 18:16:46 +05:30
Krrish Dholakia | e305dcf0a6 | test(test_proxy_startup.py): separate tests | 2024-01-08 17:58:37 +05:30
Krrish Dholakia | 2bcfe28ee9 | fix(proxy_server.py): improve /health/readiness endpoint to give more details on connected services | 2024-01-08 17:45:00 +05:30
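The readiness change (2bcfe28ee9) has the probe report which backing services are wired up instead of returning a bare status. A hedged sketch of that kind of endpoint using FastAPI, which the litellm proxy is built on; the response fields and the prisma_client/litellm_router names are assumptions, not the proxy's actual internals:

```python
from fastapi import FastAPI

app = FastAPI()

# Placeholders for whatever the proxy actually tracks (assumptions).
prisma_client = None    # set when a database is configured
litellm_router = None   # set when a router/config is loaded


@app.get("/health/readiness")
async def health_readiness():
    """Readiness probe that reports the state of connected services."""
    return {
        "status": "healthy",
        "db": "connected" if prisma_client is not None else "Not connected",
        "router": "initialized" if litellm_router is not None else "Not initialized",
    }
```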
Krrish Dholakia | 9470c7d9b8 | build(Dockerfile): new dockerfile with prisma db setup | 2024-01-08 17:41:19 +05:30
    not many services allow you to pass docker build args, so we needed another way of setting this
Krrish Dholakia | 8f8f961941 | fix(proxy_server.py): add support for passing in config file via worker_config directly + testing | 2024-01-08 16:47:15 +05:30
ishaan-jaff | fa74831d79 | (docs) control proxy debug using env vars | 2024-01-08 16:15:33 +05:30
ishaan-jaff | 5d7646b30a | (fix) proxy - show detailed_debug logs | 2024-01-08 15:34:24 +05:30
Krrish Dholakia | dd78782133 | fix(utils.py): error handling for litellm --model mistral edge case | 2024-01-08 15:09:01 +05:30
Krrish Dholakia | 1ca7747371 | fix(router.py): azure client init fix | 2024-01-08 14:56:57 +05:30
Krrish Dholakia | 1a480b3bd2 | refactor: trigger dockerbuild | 2024-01-08 14:42:28 +05:30
Ishaan Jaff | a70626d6e9 | Merge pull request #1356 from BerriAI/litellm_improve_proxy_logs | 2024-01-08 14:41:01 +05:30
    [Feat] Improve Proxy Logging
Krrish Dholakia | ec83243521 | fix(router.py): increasing connection pool limits for azure router | 2024-01-08 14:39:49 +05:30
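The connection pool commit (ec83243521) raises the client-side connection limits used for Azure deployments. One way to express that with the OpenAI SDK and httpx, which is roughly what a router has to do under the hood; the specific limit values and credentials here are illustrative, not litellm's:

```python
import httpx
from openai import AzureOpenAI

# Larger pool so many concurrent requests don't queue on the transport.
limits = httpx.Limits(max_connections=1000, max_keepalive_connections=100)

client = AzureOpenAI(
    api_key="my-azure-key",                                   # illustrative
    api_version="2023-07-01-preview",                         # illustrative
    azure_endpoint="https://my-endpoint.openai.azure.com",    # illustrative
    http_client=httpx.Client(limits=limits),
)
```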
ishaan-jaff | 96e8c2b4cf | (test) tg - ai - set max_tokens=5, fast test | 2024-01-08 14:03:31 +05:30
ishaan-jaff | bf30e8fdb2 | (test) router- verbose logs with fallbacks | 2024-01-08 14:00:12 +05:30
ishaan-jaff | c5589e71e7 | (docs) proxy, show users how to use detailed_debug | 2024-01-08 12:58:23 +05:30
ishaan-jaff | 6786e4f343 | (feat) allow users to opt into detailed debug on proxy | 2024-01-08 12:53:41 +05:30
Krrish Dholakia | f07daa3780 | bump: version 1.16.18 → 1.16.19 | 2024-01-08 12:43:29 +05:30
Krrish Dholakia | 6333fbfe56 | fix(main.py): support cost calculation for text completion streaming object | 2024-01-08 12:41:43 +05:30
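The cost fix (6333fbfe56) extends spend calculation to streamed responses. For context, the usual caller-side pattern with litellm's public helpers is to collect the chunks, rebuild a full response, and price it; the commit applies the same costing path to text completion streaming objects. A sketch, with the model choice illustrative:

```python
import litellm

messages = [{"role": "user", "content": "Say hello in one word."}]
response = litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True)

# Collect the streamed chunks, rebuild a full response object, then price it.
chunks = [chunk for chunk in response]
full_response = litellm.stream_chunk_builder(chunks, messages=messages)
cost = litellm.completion_cost(completion_response=full_response)
print(f"cost: ${cost:.6f}")
```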
ishaan-jaff | b4d9754dc2 | (feat) verbose logs + fallbacks - working well | 2024-01-08 12:33:09 +05:30
Krish Dholakia | 442ebdde7c | Update ghcr_deploy.yml | 2024-01-08 12:22:30 +05:30
Krrish Dholakia | 9b46412279 | fix(utils.py): fix logging for text completion streaming | 2024-01-08 12:05:28 +05:30
Krish Dholakia | b4d624f332 | Update ghcr_deploy.yml | 2024-01-08 11:47:12 +05:30
    always update latest tag
Krrish Dholakia | 3d0ea08f77 | refactor(gemini.py): fix linting issue | 2024-01-08 11:43:33 +05:30
Krrish Dholakia | e70a5a8970 | bump: version 1.16.17 → 1.16.18 | 2024-01-08 11:41:13 +05:30
Krrish Dholakia | b1fd0a164b | fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions | 2024-01-08 11:40:56 +05:30
    https://github.com/BerriAI/litellm/issues/1334
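The timeout commit (b1fd0a164b, issue #1334) makes the timeout parameter apply to Hugging Face and OpenAI text completions. As a usage sketch, litellm calls accept a timeout keyword; the model names and the 10-second value are illustrative:

```python
import litellm

# Chat completion against a Hugging Face endpoint with a hard 10s timeout.
response = litellm.completion(
    model="huggingface/HuggingFaceH4/zephyr-7b-beta",  # illustrative model
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    timeout=10,
)

# Same idea for an OpenAI-style text completion.
text_response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say this is a test",
    timeout=10,
)
```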
Krrish Dholakia | c720870f80 | docs(gemini.md,-deploy.md): doc updates | 2024-01-08 11:02:12 +05:30
Krish Dholakia | 4ea3e778f7 | Merge pull request #1315 from spdustin/feature_allow_claude_prefill | 2024-01-08 10:48:15 +05:30
    Adds "pre-fill" support for Claude
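The merged pre-fill PR (#1315) refers to Anthropic's pattern of putting words in Claude's mouth: when the final message in the list has the assistant role, the model continues from that partial text. A sketch of how that looks through litellm; the model string and prompt are illustrative:

```python
import litellm

response = litellm.completion(
    model="claude-2.1",  # illustrative Claude model
    messages=[
        {"role": "user", "content": "List three primary colors as a JSON array."},
        # Trailing assistant message "pre-fills" the start of Claude's reply,
        # so the completion continues from this partial text.
        {"role": "assistant", "content": "["},
    ],
)
print(response.choices[0].message.content)
```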
Krrish Dholakia | 83306fe0c9 | build(Dockerfile): fix pr merge issues | 2024-01-08 10:38:19 +05:30
ishaan-jaff | f63f9d02cc | (feat) use '-debug' with proxy logger | 2024-01-08 10:35:49 +05:30
Ishaan Jaff | 5cfcd42763 | Merge pull request #1311 from Manouchehri/patch-5 | 2024-01-08 09:47:57 +05:30
    (caching) improve s3 backend
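The s3 caching PR (#1311) touches litellm's S3 cache backend. For context, a caller enables that backend roughly like this; the bucket name and region are placeholders:

```python
import litellm
from litellm.caching import Cache

# Route cache reads/writes to an S3 bucket instead of in-memory or Redis.
litellm.cache = Cache(
    type="s3",
    s3_bucket_name="my-litellm-cache-bucket",  # placeholder
    s3_region_name="us-west-2",                # placeholder
)

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    caching=True,
)
```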
ishaan-jaff | 7e4f5e5fbd | (feat) log what model is being used as a fallback | 2024-01-08 09:41:24 +05:30
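The last entry (7e4f5e5fbd) logs which model a request actually fell back to, which only matters when fallbacks are configured. A sketch of a Router with a fallback chain so that log line has something to report; deployment params and API keys are placeholders:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "gpt-4", "api_key": "openai-key"},            # placeholder
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-key"},    # placeholder
        },
    ],
    # If gpt-4 fails, retry the request against gpt-3.5-turbo; verbose logs
    # then report which fallback deployment ended up serving the call.
    fallbacks=[{"gpt-4": ["gpt-3.5-turbo"]}],
    set_verbose=True,
)

response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
```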