Commit graph

626 commits

Author | SHA1 | Message | Date
ishaan-jaff | cea0d6c8b0 | (fix) litellm.acompletion with type hints | 2024-01-11 10:47:12 +05:30
Ishaan Jaff | 6e1be43595 | Merge pull request #1200 from MateoCamara/explicit-args-acomplete (feat: added explicit args to acomplete) | 2024-01-11 10:39:05 +05:30
Krrish Dholakia | e71154f286 | fix(main.py): fix streaming completion token counting error | 2024-01-10 23:44:35 +05:30
Mateo Cámara | fb37ea291e | Merge branch 'main' into explicit-args-acomplete | 2024-01-09 13:07:37 +01:00
Mateo Cámara | 8b84117367 | Reverted changes made by the IDE automatically | 2024-01-09 12:55:12 +01:00
ishaan-jaff | 84271cb608 | (feat) add exception mapping for litellm.image_generation | 2024-01-09 16:54:47 +05:30
Mateo Cámara | 6a9d846506 | Added the new acompletion parameters based on CompletionRequest attributes | 2024-01-09 12:05:31 +01:00
Krrish Dholakia | 5daa3ce237 | fix(main.py): support cost calculation for text completion streaming object | 2024-01-08 12:41:43 +05:30
Krrish Dholakia | e4a5a3395c | fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) | 2024-01-08 11:40:56 +05:30
Krrish Dholakia | 176af67aac | fix(caching.py): support ttl, s-max-age, and no-cache cache controls (https://github.com/BerriAI/litellm/issues/1306) | 2024-01-03 12:42:43 +05:30
ishaan-jaff | f582ef666f | (fix) counting response tokens+streaming | 2024-01-03 12:06:39 +05:30
ishaan-jaff | 0e8809abf2 | (feat) add xinference as an embedding provider | 2024-01-02 15:32:26 +05:30
ishaan-jaff | 6f1f40ef58 | (feat) cache context manager - update cache | 2023-12-30 19:50:53 +05:30
ishaan-jaff | e8bebb2e14 | (feat) add cache context manager | 2023-12-30 19:32:51 +05:30
Krrish Dholakia | 7d55a563ee | fix(main.py): don't set timeout as an optional api param | 2023-12-30 11:47:07 +05:30
ishaan-jaff | 040c127104 | (fix) batch_completions - set default timeout | 2023-12-30 11:35:55 +05:30
Krrish Dholakia | e1925d0e29 | fix(router.py): support retry and fallbacks for atext_completion | 2023-12-30 11:19:32 +05:30
ishaan-jaff | d5cbef4e36 | (feat) proxy - support dynamic timeout per request | 2023-12-30 10:55:42 +05:30
ishaan-jaff | 27f8598867 | (feat) add cloudflare streaming | 2023-12-29 12:01:26 +05:30
ishaan-jaff | b990fc8324 | (feat) cloudflare ai workers - add completion support | 2023-12-29 11:34:58 +05:30
Krrish Dholakia | a88f07dc60 | fix(main.py): fix async text completion streaming + add new tests | 2023-12-29 11:33:42 +05:30
ishaan-jaff | 796e735881 | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30
ishaan-jaff | 2a147579ec | (feat) add voyage ai embeddings | 2023-12-28 17:10:15 +05:30
ishaan-jaff | 12c6a00938 | (feat) add mistral api embeddings | 2023-12-28 16:41:55 +05:30
Krrish Dholakia | 2285282ef8 | feat(health_check.py): more detailed health check calls | 2023-12-28 09:12:57 +05:30
ishaan-jaff | 1100993834 | (fix) use client for text_completion() | 2023-12-27 15:20:26 +05:30
Krrish Dholakia | fd5e6efb1d | fix(azure.py,-openai.py): correctly raise errors if streaming calls fail | 2023-12-27 15:08:37 +05:30
Krrish Dholakia | 2269f01c17 | fix: fix linting issues | 2023-12-27 12:21:31 +05:30
Krish Dholakia | fabfe42af3 | Merge pull request #1248 from danikhan632/main (updated oobabooga to new api and support for embeddings) | 2023-12-27 11:33:56 +05:30
Ishaan Jaff | daead14f0c | Merge pull request #1249 from evantancy/main (fix: helicone logging) | 2023-12-27 11:24:19 +05:30
evantancy | 09d3972b64 | fix: helicone logging | 2023-12-27 12:16:29 +08:00
dan | c7be18cf46 | updated oobabooga to new api and support for embeddings | 2023-12-26 19:45:28 -05:00
ishaan-jaff | eb49826e4e | (fix) support ollama_chat for acompletion | 2023-12-26 20:01:51 +05:30
Krrish Dholakia | b25a8c3b42 | fix(main.py): support ttl being set for completion, embedding, image generation calls | 2023-12-26 17:22:40 +05:30
ishaan-jaff | 105dacb6fa | (chore) completion - move functions lower | 2023-12-26 14:35:59 +05:30
ishaan-jaff | c1b1d0d15d | (feat) support logprobs, top_logprobs openai | 2023-12-26 14:00:42 +05:30
ishaan-jaff | 6f19117fb3 | (feat) add logprobs, top_logprobs to litellm.completion | 2023-12-26 13:39:48 +05:30
ishaan-jaff | 39ea228046 | (feat) ollama chat | 2023-12-25 23:04:17 +05:30
ishaan-jaff | edf2b60765 | (feat) add ollama_chat v0 | 2023-12-25 14:27:10 +05:30
Krrish Dholakia | 79978c44ba | refactor: add black formatting | 2023-12-25 14:11:20 +05:30
Krrish Dholakia | 6d73a77b01 | fix(proxy_server.py): raise streaming exceptions | 2023-12-25 07:18:09 +05:30
Krrish Dholakia | 70f4dabff6 | feat(gemini.py): add support for completion calls for gemini-pro (google ai studio) | 2023-12-24 09:42:58 +05:30
Krrish Dholakia | b7a7c3a4e5 | feat(ollama.py): add support for async ollama embeddings | 2023-12-23 18:01:25 +05:30
Krrish Dholakia | c084f04a35 | fix(router.py): add support for async image generation endpoints | 2023-12-21 14:38:44 +05:30
Mateo Cámara | e60e1afa53 | feat: added explicit args to acomplete | 2023-12-20 19:49:12 +01:00
Krrish Dholakia | a8f997eceb | feat(main.py): add async image generation support | 2023-12-20 16:58:40 +05:30
Krrish Dholakia | 23d0278739 | feat(azure.py): add support for azure image generations endpoint | 2023-12-20 16:37:21 +05:30
Krrish Dholakia | 636ac9b605 | feat(ollama.py): add support for ollama function calling | 2023-12-20 14:59:55 +05:30
Krish Dholakia | 7e3f9d344c | Merge branch 'main' into main | 2023-12-18 17:54:34 -08:00
Krrish Dholakia | e03713ef74 | fix(main.py): return async completion calls | 2023-12-18 17:41:54 -08:00
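
A recurring thread in this span of history is the public async completion surface: explicit keyword args for acompletion (PR #1200), logprobs/top_logprobs support, and per-request timeouts. The following is a minimal illustrative sketch, not code from the repository itself, assuming a litellm build from roughly this point in time, an OPENAI_API_KEY in the environment, and that the chosen provider supports these parameters:

```python
# Hedged sketch exercising features referenced in the commits above.
# Assumes: OPENAI_API_KEY is set, and the provider supports logprobs.
import asyncio

import litellm


async def main() -> None:
    # acompletion now takes explicit keyword args (PR #1200),
    # so IDEs and type checkers can see the full signature.
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello."}],
        logprobs=True,     # logprobs/top_logprobs, as added for OpenAI
        top_logprobs=3,
        timeout=30,        # per-request timeout, as in the timeout commits
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```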