| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Krrish Dholakia | 0bcca3fed3 | refactor(main.py): trigger rebuild | 2024-01-13 15:55:56 +05:30 |
| ishaan-jaff | f7bdee69fb | (fix) always check if response has hidden_param attr | 2024-01-12 17:51:34 -08:00 |
| ishaan-jaff | e83c70ea55 | (feat) set custom_llm_provider for embedding hidden params | 2024-01-12 17:35:08 -08:00 |
| ishaan-jaff | f9271b59b4 | (v0) | 2024-01-12 17:05:51 -08:00 |
| Krrish Dholakia | becdabe837 | fix(main.py): support text completion routing | 2024-01-12 11:24:31 +05:30 |
| Krrish Dholakia | cbb021c9af | refactor(main.py): trigger new release | 2024-01-12 00:14:12 +05:30 |
| Krrish Dholakia | 0e1ea4325c | fix(azure.py): support health checks to text completion endpoints | 2024-01-12 00:13:01 +05:30 |
| Krish Dholakia | 7ecfc09221 | Merge branch 'main' into litellm_embedding_caching_updates | 2024-01-11 23:58:51 +05:30 |
| Krrish Dholakia | 36068b707a | fix(proxy_cli.py): read db url from config, not just environment | 2024-01-11 19:19:29 +05:30 |
| Krrish Dholakia | f3b7e98da7 | fix(main.py): init custom llm provider earlier | 2024-01-11 18:30:10 +05:30 |
| Krrish Dholakia | 4de82617c0 | fix(main.py): add back **kwargs for acompletion | 2024-01-11 16:55:19 +05:30 |
| Krrish Dholakia | 66addb1a01 | fix(utils.py): support caching individual items in embedding input list (https://github.com/BerriAI/litellm/issues/1350) | 2024-01-11 16:51:34 +05:30 |
| Krrish Dholakia | 1472dc3f54 | fix: n | 2024-01-11 16:30:05 +05:30 |
| ishaan-jaff | c41b47dc8b | (fix) acompletion kwargs type hints | 2024-01-11 14:22:37 +05:30 |
| ishaan-jaff | 29393fb512 | (fix) acompletion typehints - pass kwargs | 2024-01-11 11:49:55 +05:30 |
| ishaan-jaff | cea0d6c8b0 | (fix) litellm.acompletion with type hints | 2024-01-11 10:47:12 +05:30 |
| Ishaan Jaff | 6e1be43595 | Merge pull request #1200 from MateoCamara/explicit-args-acomplete (feat: added explicit args to acomplete) | 2024-01-11 10:39:05 +05:30 |
| Krrish Dholakia | e71154f286 | fix(main.py): fix streaming completion token counting error | 2024-01-10 23:44:35 +05:30 |
| Mateo Cámara | fb37ea291e | Merge branch 'main' into explicit-args-acomplete | 2024-01-09 13:07:37 +01:00 |
| Mateo Cámara | 8b84117367 | Reverted changes made by the IDE automatically | 2024-01-09 12:55:12 +01:00 |
| ishaan-jaff | 84271cb608 | (feat) add exception mapping for litellm.image_generation | 2024-01-09 16:54:47 +05:30 |
| Mateo Cámara | 6a9d846506 | Added the new acompletion parameters based on CompletionRequest attributes | 2024-01-09 12:05:31 +01:00 |
| Krrish Dholakia | 5daa3ce237 | fix(main.py): support cost calculation for text completion streaming object | 2024-01-08 12:41:43 +05:30 |
| Krrish Dholakia | e4a5a3395c | fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) | 2024-01-08 11:40:56 +05:30 |
| Krrish Dholakia | 176af67aac | fix(caching.py): support ttl, s-max-age, and no-cache cache controls (https://github.com/BerriAI/litellm/issues/1306) | 2024-01-03 12:42:43 +05:30 |
| ishaan-jaff | f582ef666f | (fix) counting response tokens+streaming | 2024-01-03 12:06:39 +05:30 |
| ishaan-jaff | 0e8809abf2 | (feat) add xinference as an embedding provider | 2024-01-02 15:32:26 +05:30 |
| ishaan-jaff | 6f1f40ef58 | (feat) cache context manager - update cache | 2023-12-30 19:50:53 +05:30 |
| ishaan-jaff | e8bebb2e14 | (feat) add cache context manager | 2023-12-30 19:32:51 +05:30 |
| Krrish Dholakia | 7d55a563ee | fix(main.py): don't set timeout as an optional api param | 2023-12-30 11:47:07 +05:30 |
| ishaan-jaff | 040c127104 | (fix) batch_completions - set default timeout | 2023-12-30 11:35:55 +05:30 |
| Krrish Dholakia | e1925d0e29 | fix(router.py): support retry and fallbacks for atext_completion | 2023-12-30 11:19:32 +05:30 |
| ishaan-jaff | d5cbef4e36 | (feat) proxy - support dynamic timeout per request | 2023-12-30 10:55:42 +05:30 |
| ishaan-jaff | 27f8598867 | (feat) add cloudflare streaming | 2023-12-29 12:01:26 +05:30 |
| ishaan-jaff | b990fc8324 | (feat) cloudflare ai workers - add completion support | 2023-12-29 11:34:58 +05:30 |
| Krrish Dholakia | a88f07dc60 | fix(main.py): fix async text completion streaming + add new tests | 2023-12-29 11:33:42 +05:30 |
| ishaan-jaff | 796e735881 | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30 |
| ishaan-jaff | 2a147579ec | (feat) add voyage ai embeddings | 2023-12-28 17:10:15 +05:30 |
| ishaan-jaff | 12c6a00938 | (feat) add mistral api embeddings | 2023-12-28 16:41:55 +05:30 |
| Krrish Dholakia | 2285282ef8 | feat(health_check.py): more detailed health check calls | 2023-12-28 09:12:57 +05:30 |
| ishaan-jaff | 1100993834 | (fix) use client for text_completion() | 2023-12-27 15:20:26 +05:30 |
| Krrish Dholakia | fd5e6efb1d | fix(azure.py,-openai.py): correctly raise errors if streaming calls fail | 2023-12-27 15:08:37 +05:30 |
| Krrish Dholakia | 2269f01c17 | fix: fix linting issues | 2023-12-27 12:21:31 +05:30 |
| Krish Dholakia | fabfe42af3 | Merge pull request #1248 from danikhan632/main (updated oobabooga to new api and support for embeddings) | 2023-12-27 11:33:56 +05:30 |
| Ishaan Jaff | daead14f0c | Merge pull request #1249 from evantancy/main (fix: helicone logging) | 2023-12-27 11:24:19 +05:30 |
| evantancy | 09d3972b64 | fix: helicone logging | 2023-12-27 12:16:29 +08:00 |
| dan | c7be18cf46 | updated oobabooga to new api and support for embeddings | 2023-12-26 19:45:28 -05:00 |
| ishaan-jaff | eb49826e4e | (fix) support ollama_chat for acompletion | 2023-12-26 20:01:51 +05:30 |
| Krrish Dholakia | b25a8c3b42 | fix(main.py): support ttl being set for completion, embedding, image generation calls | 2023-12-26 17:22:40 +05:30 |
| ishaan-jaff | 105dacb6fa | (chore) completion - move functions lower | 2023-12-26 14:35:59 +05:30 |