Commit graph

638 commits

Author SHA1 Message Date
ishaan-jaff
fd9bddc71a (v0) 2024-01-12 17:05:51 -08:00
Krrish Dholakia
51110bfb62 fix(main.py): support text completion routing 2024-01-12 11:24:31 +05:30
Krrish Dholakia
0cbdec563b refactor(main.py): trigger new release 2024-01-12 00:14:12 +05:30
Krrish Dholakia
a7f182b8ec fix(azure.py): support health checks to text completion endpoints 2024-01-12 00:13:01 +05:30
Krish Dholakia
817a3d29b7 Merge branch 'main' into litellm_embedding_caching_updates 2024-01-11 23:58:51 +05:30
Krrish Dholakia
43533812a7 fix(proxy_cli.py): read db url from config, not just environment 2024-01-11 19:19:29 +05:30
Krrish Dholakia
1378190dbf fix(main.py): init custom llm provider earlier 2024-01-11 18:30:10 +05:30
Krrish Dholakia
252c8415c6 fix(main.py): add back **kwargs for acompletion 2024-01-11 16:55:19 +05:30
Krrish Dholakia
2cd5f0fbe9 fix(utils.py): support caching individual items in embedding input list (https://github.com/BerriAI/litellm/issues/1350) 2024-01-11 16:51:34 +05:30
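Commit 2cd5f0fbe9 above makes the cache operate per item of the embedding input list rather than on the whole batch. A minimal sketch of that pattern, using a plain dict as a stand-in cache and a stub `embed_batch` call (both are illustrative assumptions, not litellm's actual internals):

```python
# Per-item caching for an embedding input list: look up each text
# individually, call the backend only for the misses, then merge
# cached and fresh vectors back into input order.

_cache: dict[str, list[float]] = {}

def embed_batch(texts: list[str]) -> list[list[float]]:
    # Stand-in for a real embedding API call.
    return [[float(len(t))] for t in texts]

def embed_with_item_cache(texts: list[str]) -> list[list[float]]:
    results: list = [None] * len(texts)
    misses: list[int] = []
    for i, t in enumerate(texts):
        if t in _cache:
            results[i] = _cache[t]      # cache hit: reuse stored vector
        else:
            misses.append(i)
    if misses:
        fresh = embed_batch([texts[i] for i in misses])
        for i, vec in zip(misses, fresh):
            _cache[texts[i]] = vec
            results[i] = vec
    return results
```

A repeated input thus costs one backend call for the new items only, instead of re-embedding the full list.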
Krrish Dholakia
df9df7b040 fix: n 2024-01-11 16:30:05 +05:30
ishaan-jaff
f89385eed8 (fix) acompletion kwargs type hints 2024-01-11 14:22:37 +05:30
ishaan-jaff
bd5a14daf6 (fix) acompletion typehints - pass kwargs 2024-01-11 11:49:55 +05:30
ishaan-jaff
cf86af46a8 (fix) litellm.acompletion with type hints 2024-01-11 10:47:12 +05:30
Ishaan Jaff
2433d6c613 Merge pull request #1200 from MateoCamara/explicit-args-acomplete (feat: added explicit args to acomplete) 2024-01-11 10:39:05 +05:30
Krrish Dholakia
61f2fe5837 fix(main.py): fix streaming completion token counting error 2024-01-10 23:44:35 +05:30
Mateo Cámara
203089e6c7 Merge branch 'main' into explicit-args-acomplete 2024-01-09 13:07:37 +01:00
Mateo Cámara
0ec976b3d1 Reverted changes made by the IDE automatically 2024-01-09 12:55:12 +01:00
ishaan-jaff
170ae74118 (feat) add exception mapping for litellm.image_generation 2024-01-09 16:54:47 +05:30
Mateo Cámara
48b2f69c93 Added the new acompletion parameters based on CompletionRequest attributes 2024-01-09 12:05:31 +01:00
Krrish Dholakia
6333fbfe56 fix(main.py): support cost calculation for text completion streaming object 2024-01-08 12:41:43 +05:30
Krrish Dholakia
b1fd0a164b fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) 2024-01-08 11:40:56 +05:30
Krrish Dholakia
8cee267a5b fix(caching.py): support ttl, s-max-age, and no-cache cache controls (https://github.com/BerriAI/litellm/issues/1306) 2024-01-03 12:42:43 +05:30
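Commit 8cee267a5b adds support for the ttl, s-maxage, and no-cache cache controls. A hedged sketch of parsing such directives into a cache decision; the directive names follow the commit message, but the parsing helper itself is illustrative, not litellm's code:

```python
# Parse cache-control style directives into a simple policy dict:
# no-cache forces a fresh call, ttl=/s-maxage= bound entry lifetime.

def parse_cache_controls(directives: list[str]) -> dict:
    """Return {'no_cache': bool, 'ttl': int | None} from directives
    like ['s-maxage=60'] or ['no-cache']."""
    out = {"no_cache": False, "ttl": None}
    for d in directives:
        d = d.strip().lower()
        if d == "no-cache":
            out["no_cache"] = True
        elif d.startswith(("ttl=", "s-maxage=")):
            out["ttl"] = int(d.split("=", 1)[1])
    return out
```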
ishaan-jaff
f3b8d9c3ef (fix) counting response tokens+streaming 2024-01-03 12:06:39 +05:30
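Commits f3b8d9c3ef and 61f2fe5837 both address token counting for streamed responses. The usual pitfall, sketched below with a whitespace tokenizer standing in for a real one such as tiktoken, is that chunk boundaries can split tokens, so counts must be taken over the reassembled text rather than summed per chunk:

```python
# Count completion tokens for a streamed response by reassembling
# the chunks first; counting per chunk and summing over-counts
# whenever a token is split across a chunk boundary.

def count_tokens(text: str) -> int:
    # Stand-in tokenizer: whitespace-delimited words.
    return len(text.split())

def count_stream_tokens(chunks: list[str]) -> int:
    full_text = "".join(chunks)         # reassemble before counting
    return count_tokens(full_text)
```

For `["Hel", "lo wor", "ld"]` the reassembled count is 2, while a naive per-chunk sum would report 4.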
ishaan-jaff
790dcff5e0 (feat) add xinference as an embedding provider 2024-01-02 15:32:26 +05:30
ishaan-jaff
70cdc16d6f (feat) cache context manager - update cache 2023-12-30 19:50:53 +05:30
ishaan-jaff
ddddfe6602 (feat) add cache context manager 2023-12-30 19:32:51 +05:30
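Commits ddddfe6602 and 70cdc16d6f add and then update a cache context manager. litellm's real API differs; this sketch only shows the general pattern such a manager follows, installing a cache into module-level settings and restoring the previous state on exit:

```python
# A context manager that temporarily enables a cache and always
# restores the prior cache object, even if the block raises.

from contextlib import contextmanager

class Settings:
    cache = None  # module-level cache slot, as many libraries keep

settings = Settings()

@contextmanager
def enable_cache(cache_obj):
    previous = settings.cache
    settings.cache = cache_obj      # install the cache for this block
    try:
        yield cache_obj
    finally:
        settings.cache = previous   # restore prior state on exit
```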
Krrish Dholakia
77be3e3114 fix(main.py): don't set timeout as an optional api param 2023-12-30 11:47:07 +05:30
ishaan-jaff
aee38d9329 (fix) batch_completions - set default timeout 2023-12-30 11:35:55 +05:30
Krrish Dholakia
38f55249e1 fix(router.py): support retry and fallbacks for atext_completion 2023-12-30 11:19:32 +05:30
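Commit 38f55249e1 adds retry and fallback behavior to the router's `atext_completion` path. A hedged sketch of that control flow; `call_model` is a stub and litellm's Router has its own signatures, but the retry-then-fall-back loop is the general shape:

```python
# Try each model in fallback order, retrying a bounded number of
# times per model before moving on; re-raise the last error if all fail.

import asyncio

async def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real async completion call.
    if model == "bad-model":
        raise RuntimeError("provider error")
    return f"{model}: ok"

async def atext_completion_with_fallbacks(models, prompt, retries=2):
    last_err = None
    for model in models:                  # fallback order
        for _ in range(retries):          # per-model retries
            try:
                return await call_model(model, prompt)
            except Exception as e:
                last_err = e
    raise last_err
```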
ishaan-jaff
2f4cd3b569 (feat) proxy - support dynamic timeout per request 2023-12-30 10:55:42 +05:30
ishaan-jaff
ee682be093 (feat) add cloudflare streaming 2023-12-29 12:01:26 +05:30
ishaan-jaff
8fcfb7df22 (feat) cloudflare ai workers - add completion support 2023-12-29 11:34:58 +05:30
Krrish Dholakia
6f2734100f fix(main.py): fix async text completion streaming + add new tests 2023-12-29 11:33:42 +05:30
ishaan-jaff
367e9913dc (feat) v0 adding cloudflare 2023-12-29 09:32:29 +05:30
ishaan-jaff
95e6d2fbba (feat) add voyage ai embeddings 2023-12-28 17:10:15 +05:30
ishaan-jaff
78f0c0228b (feat) add mistral api embeddings 2023-12-28 16:41:55 +05:30
Krrish Dholakia
3b1685e7c6 feat(health_check.py): more detailed health check calls 2023-12-28 09:12:57 +05:30
ishaan-jaff
f4fe2575cc (fix) use client for text_completion() 2023-12-27 15:20:26 +05:30
Krrish Dholakia
c9fdbaf898 fix(azure.py,-openai.py): correctly raise errors if streaming calls fail 2023-12-27 15:08:37 +05:30
Krrish Dholakia
c88a8d71f0 fix: fix linting issues 2023-12-27 12:21:31 +05:30
Krish Dholakia
5c3a61d62f Merge pull request #1248 from danikhan632/main (updated oobabooga to new api and support for embeddings) 2023-12-27 11:33:56 +05:30
Ishaan Jaff
22d0c21829 Merge pull request #1249 from evantancy/main (fix: helicone logging) 2023-12-27 11:24:19 +05:30
evantancy
668c786099 fix: helicone logging 2023-12-27 12:16:29 +08:00
dan
c4dfd9be7c updated oobabooga to new api and support for embeddings 2023-12-26 19:45:28 -05:00
ishaan-jaff
751d57379d (fix) support ollama_chat for acompletion 2023-12-26 20:01:51 +05:30
Krrish Dholakia
f0b6b9dce2 fix(main.py): support ttl being set for completion, embedding, image generation calls 2023-12-26 17:22:40 +05:30
ishaan-jaff
a463625452 (chore) completion - move functions lower 2023-12-26 14:35:59 +05:30
ishaan-jaff
7b097305c1 (feat) support logprobs, top_logprobs openai 2023-12-26 14:00:42 +05:30
ishaan-jaff
0b0d22d58c (feat) add logprobs, top_logprobs to litellm.completion 2023-12-26 13:39:48 +05:30
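Commits 0b0d22d58c and 7b097305c1 wire the OpenAI-style `logprobs` and `top_logprobs` parameters through `litellm.completion`. A minimal sketch of the request shape those parameters produce; no network call is made, and the helper below is a hypothetical illustration, not litellm's own code:

```python
# Assemble an OpenAI-style chat completion request, passing the
# logprobs parameters through only when the caller sets them.

def build_completion_request(model, messages, logprobs=None, top_logprobs=None):
    params = {"model": model, "messages": messages}
    if logprobs is not None:
        params["logprobs"] = logprobs       # return per-token logprobs
    if top_logprobs is not None:
        # number of most-likely alternatives per position
        # (the OpenAI API requires logprobs=True alongside it)
        params["top_logprobs"] = top_logprobs
    return params
```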
ishaan-jaff
8c35aebdf8 (feat) ollama chat 2023-12-25 23:04:17 +05:30