Ishaan Jaff
1c16904566
fix cache openai client for embeddings, text, speech
2024-05-31 21:35:03 -07:00
Ishaan Jaff
cedeb10a08
fix - linting error
2024-05-31 21:24:14 -07:00
Ishaan Jaff
6feeff1f31
feat - cache openai clients
2024-05-31 21:22:06 -07:00
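The client-caching commits above avoid constructing a fresh OpenAI client on every request. A minimal sketch of the idea only (not litellm's actual implementation): reuse one client per (api_key, api_base, timeout) tuple.

```
from typing import Optional
import openai

_client_cache: dict = {}  # simple in-memory cache, keyed by client settings

def get_cached_openai_client(
    api_key: str,
    api_base: Optional[str] = None,
    timeout: float = 600.0,
) -> openai.OpenAI:
    # Reuse one client per (api_key, api_base, timeout) instead of building a
    # new connection pool on every call.
    cache_key = (api_key, api_base, timeout)
    if cache_key not in _client_cache:
        _client_cache[cache_key] = openai.OpenAI(
            api_key=api_key, base_url=api_base, timeout=timeout
        )
    return _client_cache[cache_key]
```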
Krish Dholakia
08bae3185a
Merge pull request #3936 from BerriAI/litellm_assistants_api_proxy
...
feat(proxy_server.py): add assistants api endpoints to proxy server
2024-05-31 18:43:22 -07:00
Krrish Dholakia
e2b34165e7
feat(proxy_server.py): add assistants api endpoints to proxy server
2024-05-30 22:44:43 -07:00
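Since the proxy exposes OpenAI-compatible routes, the new assistants endpoints can be exercised with the OpenAI SDK pointed at the proxy. A hedged sketch; the base_url and api_key below are placeholders for your own deployment.

```
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # assumed local litellm proxy address
    api_key="sk-1234",                 # placeholder proxy key
)

assistants = client.beta.assistants.list()  # routed through the proxy's assistants endpoints
thread = client.beta.threads.create()
print(len(assistants.data), thread.id)
```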
Krrish Dholakia
93166cdabf
fix(openai.py): fix openai response for /audio/speech endpoint
2024-05-30 16:41:06 -07:00
Krrish Dholakia
a67cbf47f6
feat(main.py): support openai tts endpoint
...
Closes https://github.com/BerriAI/litellm/issues/3094
2024-05-30 14:28:28 -07:00
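A minimal usage sketch for the tts support, assuming it is exposed as litellm.speech() with OpenAI-style arguments; treat the exact parameter names and the stream_to_file helper as assumptions.

```
import litellm

response = litellm.speech(
    model="openai/tts-1",   # assumed model naming
    voice="alloy",
    input="Hello from litellm",
)
response.stream_to_file("speech.mp3")  # assumed helper on the returned audio object
```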
Krrish Dholakia
da56201e80
fix(main.py): pass api key and api base to openai.py for audio transcription call
2024-05-29 21:29:01 -07:00
Krrish Dholakia
3d32b00821
fix(openai.py): only allow 'user' as optional param if openai model
2024-05-29 15:15:02 -07:00
Ishaan Jaff
ca8163bbba
feat - add afile_content, file_content
2024-05-28 20:58:22 -07:00
Ishaan Jaff
6688215c18
feat - add aretrieve_batch
2024-05-28 17:12:41 -07:00
Ishaan Jaff
1ef7cd923c
feat - add acreate_batch
2024-05-28 17:03:29 -07:00
Ishaan Jaff
758ed9e923
feat - add litellm.acreate_file
2024-05-28 16:47:27 -07:00
Ishaan Jaff
38285e53c3
working create_batch
2024-05-28 15:45:23 -07:00
Ishaan Jaff
d5dbf084ed
feat - import batches in __init__
2024-05-28 15:35:11 -07:00
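The file/batch helpers introduced in the entries above (acreate_file, acreate_batch, aretrieve_batch, afile_content) roughly follow the OpenAI batch workflow. A hedged sketch; the keyword names are assumed to mirror the OpenAI SDK and may differ slightly.

```
import asyncio
import litellm

async def run_batch() -> None:
    with open("batch_requests.jsonl", "rb") as f:  # OpenAI-format batch input file
        created_file = await litellm.acreate_file(
            file=f, purpose="batch", custom_llm_provider="openai"
        )
    batch = await litellm.acreate_batch(
        input_file_id=created_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
        custom_llm_provider="openai",
    )
    retrieved = await litellm.aretrieve_batch(
        batch_id=batch.id, custom_llm_provider="openai"
    )
    if retrieved.output_file_id:  # present once the batch has finished
        results = await litellm.afile_content(
            file_id=retrieved.output_file_id, custom_llm_provider="openai"
        )
        print(results.content)

asyncio.run(run_batch())
```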
Krrish Dholakia
322a8218c0
fix(openai.py): fix deepinfra config optional param
2024-05-27 18:36:34 -07:00
Krrish Dholakia
f0f853b941
fix(utils.py): support deepinfra optional params
...
Fixes https://github.com/BerriAI/litellm/issues/3855
2024-05-27 09:16:56 -07:00
Krrish Dholakia
43353c28b3
feat(databricks.py): add embedding model support
2024-05-23 18:22:03 -07:00
Krrish Dholakia
d2229dcd21
feat(databricks.py): adds databricks support - completion, async, streaming
...
Closes https://github.com/BerriAI/litellm/issues/2160
2024-05-23 16:29:46 -07:00
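A hedged usage sketch for the databricks integration; the "databricks/" prefix follows litellm's usual provider convention, but the specific model ids below are only illustrative.

```
import litellm

# Streaming chat completion against a databricks-hosted model.
response = litellm.completion(
    model="databricks/databricks-dbrx-instruct",  # assumed model id
    messages=[{"role": "user", "content": "Hello from litellm"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

# Embeddings against a databricks-hosted embedding model.
embeddings = litellm.embedding(
    model="databricks/databricks-bge-large-en",  # assumed model id
    input=["litellm + databricks"],
)
print(len(embeddings.data))
```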
Krish Dholakia
d69ad99e76
Merge pull request #3657 from phact/patch-4
...
Another dictionary changed size during iteration error
2024-05-20 17:45:50 -07:00
Krrish Dholakia
25df95ab10
feat(proxy_server.py): new 'supported_openai_params' endpoint
...
get supported openai params for a given model
2024-05-20 08:39:50 -07:00
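A hedged sketch of the same lookup from python, assuming the get_supported_openai_params helper that the endpoint name suggests.

```
import litellm

# Returns the OpenAI-style params litellm will accept for this model.
params = litellm.get_supported_openai_params(model="gpt-4")
print(params)  # e.g. ["temperature", "max_tokens", "response_format", ...]
```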
Krrish Dholakia
12942c39db
fix(utils.py): drop response_format if 'drop_params=True' for gpt-4
2024-05-18 13:02:48 -07:00
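A minimal sketch of the drop_params behavior described above: with drop_params=True, an unsupported param (here, response_format) is dropped rather than raising; the model name is only illustrative.

```
import litellm

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Reply with a JSON object."}],
    response_format={"type": "json_object"},
    drop_params=True,  # drop response_format for models that reject it, instead of erroring
)
print(response.choices[0].message.content)
```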
Sebastián Estévez
9b7465a222
Another dictionary changed size during iteration error
...
```
ImportError while loading conftest '/astra-assistants-api/tests/openai-sdk/conftest.py'.
conftest.py:13: in <module>
from impl.astra_vector import CassandraClient
../../impl/astra_vector.py:45: in <module>
from impl.services.inference_utils import get_embeddings
../../impl/services/inference_utils.py:5: in <module>
import litellm
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/__init__.py:678: in <module>
from .main import * # type: ignore
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/main.py:73: in <module>
from .llms.azure_text import AzureTextCompletion
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/llms/azure_text.py:23: in <module>
openai_text_completion_config = OpenAITextCompletionConfig()
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/llms/openai.py:192: in __init__
for key, value in locals_.items():
E RuntimeError: dictionary changed size during iteration
```
2024-05-15 17:06:54 -04:00
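The traceback above fails while iterating a live locals() mapping inside a config __init__. A hedged sketch of the failure mode and the usual fix, iterating a snapshot instead (illustrative, not the exact patch).

```
class ExampleConfig:
    def __init__(self, temperature=None, max_tokens=None) -> None:
        # Iterating locals() directly can raise
        # "RuntimeError: dictionary changed size during iteration" if the
        # mapping changes mid-loop; copy it first so iteration is stable.
        locals_ = locals().copy()
        for key, value in locals_.items():
            if key != "self" and value is not None:
                setattr(self, key, value)
```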
Krrish Dholakia
20456968e9
fix(openai.py): create MistralConfig with response_format mapping for mistral api
2024-05-13 13:29:58 -07:00
Ishaan Jaff
66053f14ae
stream_options for text-completion openai
2024-05-09 08:37:40 -07:00
Ishaan Jaff
1042051602
support stream_options for chat completion models
2024-05-08 21:52:25 -07:00
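A minimal sketch of stream_options usage, assuming it is forwarded as in the OpenAI API, where include_usage appends a final chunk carrying token usage.

```
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in response:
    print(chunk)  # the final chunk should carry a usage block
```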
Krrish Dholakia
6575143460
feat(proxy_server.py): return litellm version in response headers
2024-05-08 16:00:08 -07:00
Krrish Dholakia
1195bf296b
fix(openai.py): fix typing import for python 3.8
2024-05-04 21:49:30 -07:00
Krrish Dholakia
f2bf6411d8
fix(openai.py): fix linting error
2024-05-04 21:48:42 -07:00
Krrish Dholakia
8fe6c9b401
feat(assistants/main.py): support litellm.get_assistants() and litellm.get_messages()
2024-05-04 21:30:28 -07:00
Krrish Dholakia
cad01fb586
feat(assistants/main.py): support 'litellm.get_threads'
2024-05-04 21:14:03 -07:00
Krrish Dholakia
b7796c7487
feat(assistants/main.py): add 'add_message' endpoint
2024-05-04 19:56:11 -07:00
Krrish Dholakia
681a95e37b
fix(assistants/main.py): support litellm.create_thread() call
2024-05-04 19:35:37 -07:00
Krrish Dholakia
84c31a5528
feat(openai.py): add support for openai assistants
...
v0 commit. Closes https://github.com/BerriAI/litellm/issues/2842
2024-05-04 17:27:48 -07:00
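A hedged sketch tying together the assistants helpers named in the entries above (get_assistants, create_thread, add_message, get_messages); the custom_llm_provider keyword follows litellm's usual convention, but the exact signatures are assumptions.

```
import litellm

assistants = litellm.get_assistants(custom_llm_provider="openai")

thread = litellm.create_thread(
    custom_llm_provider="openai",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
litellm.add_message(
    thread_id=thread.id,
    custom_llm_provider="openai",
    role="user",
    content="What's the weather like today?",
)
messages = litellm.get_messages(thread_id=thread.id, custom_llm_provider="openai")
print(len(assistants.data), len(messages.data))
```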
Krrish Dholakia
a732d8772a
fix(bedrock.py): convert httpx.timeout to boto3 valid timeout
...
Closes https://github.com/BerriAI/litellm/issues/3398
2024-05-03 16:24:21 -07:00
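A hedged sketch of the conversion described above: boto3 expects plain connect/read timeouts on botocore's Config rather than an httpx.Timeout object (illustrative, not the exact litellm code).

```
import httpx
from botocore.config import Config

def to_boto3_config(timeout: httpx.Timeout) -> Config:
    # botocore takes separate connect/read timeouts in seconds; fall back to
    # defaults when the httpx.Timeout fields are unset.
    return Config(
        connect_timeout=timeout.connect or 60,
        read_timeout=timeout.read or 600,
    )
```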
Krrish Dholakia
160acc085a
fix(router.py): fix default retry logic
2024-04-25 11:57:27 -07:00
Krrish Dholakia
48c2c3d78a
fix(utils.py): fix streaming to not return usage dict
...
Fixes https://github.com/BerriAI/litellm/issues/3237
2024-04-24 08:06:07 -07:00
Krrish Dholakia
475144e5b7
fix(openai.py): support passing prompt as list instead of concat string
2024-04-03 15:23:20 -07:00
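A minimal sketch of passing a list of prompts to text completion, assuming the OpenAI-style batching where each prompt gets its own choice; the model name is illustrative.

```
import litellm

response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt=["Say hello.", "Say goodbye."],
    max_tokens=10,
)
print([choice.text for choice in response.choices])
```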
Krrish Dholakia
15e0099948
fix(proxy_server.py): return original model response via response headers - /v1/completions
...
to help devs with debugging
2024-04-03 13:05:43 -07:00
Krrish Dholakia
919ec86b2b
fix(openai.py): switch to using openai sdk for text completion calls
2024-04-02 15:08:12 -07:00
Krrish Dholakia
b07788d2a5
fix(openai.py): return logprobs for text completion calls
2024-04-02 14:05:56 -07:00
Krrish Dholakia
ceabf726b0
fix(main.py): support max retries for transcription calls
2024-04-01 18:37:53 -07:00
Krrish Dholakia
0033613b9e
fix(openai.py): return model name with custom llm provider for openai compatible endpoints
2024-03-12 10:30:10 -07:00
Krrish Dholakia
8d2d51b625
fix(utils.py): fix model name checking
2024-03-09 18:22:26 -08:00
Krrish Dholakia
fa45c569fd
feat: add cost tracking + caching for transcription calls
2024-03-09 15:43:38 -08:00
Krrish Dholakia
775997b283
fix(openai.py): fix async audio transcription
2024-03-08 23:33:54 -08:00
Krish Dholakia
caa99f43bf
Merge branch 'main' into litellm_load_balancing_transcription_endpoints
2024-03-08 23:08:47 -08:00
Krish Dholakia
e245b1c98a
Merge pull request #2401 from BerriAI/litellm_transcription_endpoints
...
feat(main.py): support openai transcription endpoints
2024-03-08 23:07:48 -08:00
Krrish Dholakia
0fb7afe820
feat(proxy_server.py): working /audio/transcription endpoint
2024-03-08 18:20:27 -08:00
Krrish Dholakia
696eb54455
feat(main.py): support openai transcription endpoints
...
enable users to load balance between openai + azure transcription endpoints
2024-03-08 10:25:19 -08:00
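A hedged sketch of the transcription entrypoint referenced in these entries; the api_base/api_key overrides shown as comments are placeholders and simply flow through to the provider call.

```
import litellm

with open("sample.wav", "rb") as audio_file:
    transcript = litellm.transcription(
        model="whisper-1",
        file=audio_file,
        # optional overrides passed through to the provider:
        # api_base="https://my-endpoint.example.com",  # placeholder
        # api_key="sk-...",                             # placeholder
    )
print(transcript.text)
```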