Krrish Dholakia
43353c28b3
feat(databricks.py): add embedding model support
2024-05-23 18:22:03 -07:00
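A minimal sketch of the embedding path this commit adds. The `databricks/` model identifier below is an assumption for illustration, and credentials are expected to come from the usual Databricks environment variables (API key and base URL) rather than being passed explicitly.

```python
import litellm

# Assumed example model name; credentials are read from the environment.
response = litellm.embedding(
    model="databricks/databricks-bge-large-en",
    input=["What is Databricks?"],
)
print(response.data[0])  # OpenAI-style embedding object
```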
Krrish Dholakia
d2229dcd21
feat(databricks.py): adds databricks support - completion, async, streaming
...
Closes https://github.com/BerriAI/litellm/issues/2160
2024-05-23 16:29:46 -07:00
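A minimal sketch of the completion and streaming paths this commit describes, with an assumed `databricks/` model name; `litellm.acompletion` is the async counterpart mentioned in the subject line.

```python
import litellm

messages = [{"role": "user", "content": "Say hello"}]

# Blocking call (model name is an illustrative assumption)
response = litellm.completion(
    model="databricks/databricks-dbrx-instruct",
    messages=messages,
)
print(response.choices[0].message.content)

# Streaming call: print deltas as they arrive
for chunk in litellm.completion(
    model="databricks/databricks-dbrx-instruct",
    messages=messages,
    stream=True,
):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```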
Krish Dholakia
d69ad99e76
Merge pull request #3657 from phact/patch-4
...
Another dictionary changed size during iteration error
2024-05-20 17:45:50 -07:00
Krrish Dholakia
25df95ab10
feat(proxy_server.py): new 'supported_openai_params' endpoint
...
get supported openai params for a given model
2024-05-20 08:39:50 -07:00
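The SDK-side helper this endpoint presumably wraps is `litellm.get_supported_openai_params`; the proxy route path itself is not shown in the log, so only the library call is sketched here.

```python
import litellm

# Returns the OpenAI-style parameters this model/provider pair accepts,
# e.g. ["temperature", "max_tokens", "stream", ...].
params = litellm.get_supported_openai_params(model="gpt-4")
print(params)
```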
Krrish Dholakia
12942c39db
fix(utils.py): drop response_format if 'drop_params=True' for gpt-4
2024-05-18 13:02:48 -07:00
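A sketch of the behavior this fix targets: with `drop_params` enabled, an OpenAI-style parameter the deployment rejects (here `response_format`) is dropped instead of raising. The global flag is shown; the message content is illustrative.

```python
import litellm

litellm.drop_params = True  # drop unsupported OpenAI params instead of erroring

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Return a short JSON greeting"}],
    # Dropped here if the underlying deployment can't accept it.
    response_format={"type": "json_object"},
)
```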
Sebastián Estévez
9b7465a222
Another dictionary changed size during iteration error
...
```
ImportError while loading conftest '/astra-assistants-api/tests/openai-sdk/conftest.py'.
conftest.py:13: in <module>
from impl.astra_vector import CassandraClient
../../impl/astra_vector.py:45: in <module>
from impl.services.inference_utils import get_embeddings
../../impl/services/inference_utils.py:5: in <module>
import litellm
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/__init__.py:678: in <module>
from .main import * # type: ignore
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/main.py:73: in <module>
from .llms.azure_text import AzureTextCompletion
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/llms/azure_text.py:23: in <module>
openai_text_completion_config = OpenAITextCompletionConfig()
.cache/pypoetry/virtualenvs/astra-assistants-api-eiSmbCzm-py3.10/lib/python3.10/site-packages/litellm/llms/openai.py:192: in __init__
for key, value in locals_.items():
E RuntimeError: dictionary changed size during iteration
```
2024-05-15 17:06:54 -04:00
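The traceback points at a config `__init__` that iterates a live `locals()` view while the namespace is being mutated. A common remedy, sketched below with an illustrative config class (not necessarily the exact patch merged in #3657), is to iterate over a snapshot instead.

```python
class ExampleConfig:
    # Illustrative stand-in; the real code lives in litellm/llms/openai.py.
    max_tokens = None
    temperature = None

    def __init__(self, max_tokens=None, temperature=None):
        # Copy locals() so anything that mutates the namespace mid-loop
        # (one way the dict can change under you) can't invalidate the iterator.
        locals_ = locals().copy()
        for key, value in locals_.items():
            if key != "self" and value is not None:
                setattr(self.__class__, key, value)
```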
Krrish Dholakia
20456968e9
fix(openai.py): create MistralConfig with response_format mapping for mistral api
2024-05-13 13:29:58 -07:00
Ishaan Jaff
66053f14ae
stream_options for text-completion openai
2024-05-09 08:37:40 -07:00
Ishaan Jaff
1042051602
support stream_options for chat completion models
2024-05-08 21:52:25 -07:00
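These two commits wire through OpenAI's `stream_options`. A minimal sketch of requesting the trailing usage chunk on a chat completion; the model name is illustrative.

```python
import litellm

stream = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    stream=True,
    stream_options={"include_usage": True},  # ask for a final usage chunk
)

for chunk in stream:
    # The trailing usage chunk typically carries no choices, hence the guard.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```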
Krrish Dholakia
6575143460
feat(proxy_server.py): return litellm version in response headers
2024-05-08 16:00:08 -07:00
Krrish Dholakia
1195bf296b
fix(openai.py): fix typing import for python 3.8
2024-05-04 21:49:30 -07:00
Krrish Dholakia
f2bf6411d8
fix(openai.py): fix linting error
2024-05-04 21:48:42 -07:00
Krrish Dholakia
8fe6c9b401
feat(assistants/main.py): support litellm.get_assistants() and litellm.get_messages()
2024-05-04 21:30:28 -07:00
Krrish Dholakia
cad01fb586
feat(assistants/main.py): support 'litellm.get_threads'
2024-05-04 21:14:03 -07:00
Krrish Dholakia
b7796c7487
feat(assistants/main.py): add 'add_message' endpoint
2024-05-04 19:56:11 -07:00
Krrish Dholakia
681a95e37b
fix(assistants/main.py): support litellm.create_thread() call
2024-05-04 19:35:37 -07:00
Krrish Dholakia
84c31a5528
feat(openai.py): add support for openai assistants
...
v0 commit. Closes https://github.com/BerriAI/litellm/issues/2842
2024-05-04 17:27:48 -07:00
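The run of commits above introduces an OpenAI Assistants surface (`get_assistants`, `get_threads`, `add_message`, `create_thread`, `get_messages`). Below is a rough sketch of how those helpers might chain together; the `custom_llm_provider` keyword and the thread/message shapes are assumptions based on the OpenAI Assistants API, not verified signatures.

```python
import litellm

# Assumed keyword arguments; the commits only name the helper functions.
assistants = litellm.get_assistants(custom_llm_provider="openai")
print([a.id for a in assistants.data])

thread = litellm.create_thread(
    custom_llm_provider="openai",
    messages=[{"role": "user", "content": "What is LiteLLM?"}],
)

litellm.add_message(
    custom_llm_provider="openai",
    thread_id=thread.id,
    role="user",
    content="Please keep the answer short.",
)

messages = litellm.get_messages(custom_llm_provider="openai", thread_id=thread.id)
print([m.id for m in messages.data])
```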
Krrish Dholakia
a732d8772a
fix(bedrock.py): convert httpx.timeout to boto3 valid timeout
...
Closes https://github.com/BerriAI/litellm/issues/3398
2024-05-03 16:24:21 -07:00
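A sketch of the conversion this fix describes: boto3 does not accept an `httpx.Timeout` object, so its connect/read values are unpacked into a `botocore` `Config`. The fallback defaults below are illustrative assumptions, not litellm's actual values.

```python
import httpx
from botocore.config import Config

httpx_timeout = httpx.Timeout(600.0, connect=5.0)

# Map httpx's connect/read fields onto the names boto3 understands.
boto_config = Config(
    connect_timeout=httpx_timeout.connect or 60,
    read_timeout=httpx_timeout.read or 600,
)

# boto3.client("bedrock-runtime", config=boto_config, ...) would then pick these up.
```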
Krrish Dholakia
160acc085a
fix(router.py): fix default retry logic
2024-04-25 11:57:27 -07:00
Krrish Dholakia
48c2c3d78a
fix(utils.py): fix streaming to not return usage dict
...
Fixes https://github.com/BerriAI/litellm/issues/3237
2024-04-24 08:06:07 -07:00
Krrish Dholakia
475144e5b7
fix(openai.py): support passing prompt as list instead of concat string
2024-04-03 15:23:20 -07:00
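A sketch of the list-of-prompts call this enables, assuming an OpenAI-style text-completion response shape; the instruct model name is illustrative.

```python
import litellm

response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt=["Say hello in French.", "Say hello in Spanish."],  # a list, not one joined string
)
for choice in response.choices:
    print(choice["text"])
```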
Krrish Dholakia
15e0099948
fix(proxy_server.py): return original model response via response headers - /v1/completions
...
to help devs with debugging
2024-04-03 13:05:43 -07:00
Krrish Dholakia
919ec86b2b
fix(openai.py): switch to using openai sdk for text completion calls
2024-04-02 15:08:12 -07:00
Krrish Dholakia
b07788d2a5
fix(openai.py): return logprobs for text completion calls
2024-04-02 14:05:56 -07:00
Krrish Dholakia
ceabf726b0
fix(main.py): support max retries for transcription calls
2024-04-01 18:37:53 -07:00
Krrish Dholakia
0033613b9e
fix(openai.py): return model name with custom llm provider for openai compatible endpoints
2024-03-12 10:30:10 -07:00
Krrish Dholakia
8d2d51b625
fix(utils.py): fix model name checking
2024-03-09 18:22:26 -08:00
Krrish Dholakia
fa45c569fd
feat: add cost tracking + caching for transcription calls
2024-03-09 15:43:38 -08:00
Krrish Dholakia
775997b283
fix(openai.py): fix async audio transcription
2024-03-08 23:33:54 -08:00
Krish Dholakia
caa99f43bf
Merge branch 'main' into litellm_load_balancing_transcription_endpoints
2024-03-08 23:08:47 -08:00
Krish Dholakia
e245b1c98a
Merge pull request #2401 from BerriAI/litellm_transcription_endpoints
...
feat(main.py): support openai transcription endpoints
2024-03-08 23:07:48 -08:00
Krrish Dholakia
0fb7afe820
feat(proxy_server.py): working /audio/transcription endpoint
2024-03-08 18:20:27 -08:00
Krrish Dholakia
696eb54455
feat(main.py): support openai transcription endpoints
...
enable user to load balance between openai + azure transcription endpoints
2024-03-08 10:25:19 -08:00
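A minimal sketch of the new transcription call, assuming the standard OpenAI `whisper-1` model name; the OpenAI/Azure load balancing mentioned in the commit body would sit behind the Router or proxy rather than this direct call.

```python
import litellm

# "whisper-1" is the standard OpenAI model name, used here as an illustrative default.
with open("sample.mp3", "rb") as audio_file:
    result = litellm.transcription(model="whisper-1", file=audio_file)

print(result.text)
```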
ishaan-jaff
96e3696138
(fix) support name on perplexity/
2024-03-08 09:41:58 -08:00
Krrish Dholakia
17e1485fbe
refactor(openai.py): more logging around failed openai calls
2024-02-29 19:30:40 -08:00
ishaan-jaff
c315c18695
(fix) use api_base in health checks
2024-02-24 18:39:20 -08:00
Krrish Dholakia
c9e5c796ad
fix(factory.py): mistral message input fix
2024-02-08 20:54:26 -08:00
Krrish Dholakia
c49c88c8e5
fix(utils.py): route together ai calls to openai client
...
together ai is now openai-compatible
2024-02-03 19:22:48 -08:00
ishaan-jaff
e011c4a989
(fix) use OpenAI organization in ahealth_check
2024-01-30 11:45:22 -08:00
ishaan-jaff
ae4e273db7
(feat) OpenAI set organization
2024-01-30 10:54:56 -08:00
Krrish Dholakia
d755d50901
fix(openai.py): fix openai image gen logging
2024-01-26 21:05:49 -08:00
Krrish Dholakia
f19f0dad89
fix(router.py): fix client init
2024-01-22 22:15:39 -08:00
Krrish Dholakia
3e8c8ef507
fix(openai.py): fix linting issue
2024-01-22 18:20:15 -08:00
Krrish Dholakia
a7f182b8ec
fix(azure.py): support health checks to text completion endpoints
2024-01-12 00:13:01 +05:30
Krrish Dholakia
ed6ae8600f
fix(openai.py): fix exception raising logic
2024-01-09 11:58:30 +05:30
Krrish Dholakia
be1e101b5f
fix(azure.py, openai.py): raise the correct exceptions for image generation calls
2024-01-09 11:55:38 +05:30
Krrish Dholakia
b1fd0a164b
fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions
...
https://github.com/BerriAI/litellm/issues/1334
2024-01-08 11:40:56 +05:30
Krrish Dholakia
f2ad13af65
fix(openai.py): fix image generation model dump
2024-01-06 17:55:32 +05:30
Krrish Dholakia
9a4a96f46e
perf(azure+openai-files): use model_dump instead of json.loads + model_dump_json
2024-01-06 15:50:05 +05:30
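The change here is a pydantic-v2 micro-optimization: build the dict directly with `model_dump()` instead of serializing to JSON and parsing it back. The stand-in model below (`FileObject`) is illustrative, not the actual class touched.

```python
import json
from pydantic import BaseModel


class FileObject(BaseModel):
    # Illustrative stand-in for the Azure/OpenAI file objects being logged.
    id: str
    bytes: int


obj = FileObject(id="file-abc123", bytes=1024)

# Before: encode to a JSON string, then decode straight back into a dict.
as_dict_slow = json.loads(obj.model_dump_json())

# After: go straight to a dict, skipping the encode/decode round trip.
as_dict_fast = obj.model_dump()

assert as_dict_slow == as_dict_fast
```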
ishaan-jaff
79ab1aa35b
(fix) undo - model_dump_json() before logging
2024-01-05 11:47:16 +05:30