Commit graph

197 commits

Author SHA1 Message Date
Krrish Dholakia
18b67a455e test: fix test 2024-08-27 10:46:57 -07:00
Krrish Dholakia
87549a2391 fix(main.py): cover openai /v1/completions endpoint 2024-08-24 13:25:17 -07:00
Krrish Dholakia
de2373d52b fix(openai.py): coverage for correctly re-raising exception headers on openai chat completion + embedding endpoints 2024-08-24 12:55:15 -07:00
Krrish Dholakia
068aafdff9 fix(utils.py): correctly re-raise the headers from an exception, if present
Fixes an issue where the router's retry-after handling was not using the Azure/OpenAI values
2024-08-24 12:30:30 -07:00
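
(For the commit above, a minimal sketch of the general idea: keep the provider's rate-limit headers on a re-raised error so retry-after can come from the Azure/OpenAI response instead of a default. The helper and attribute names below are illustrative, not litellm internals.)

```python
import openai

def call_with_preserved_headers(client: openai.OpenAI, **kwargs):
    """Sketch only: keep the provider's rate-limit headers on a re-raised error."""
    try:
        return client.chat.completions.create(**kwargs)
    except openai.APIStatusError as e:
        # openai-python keeps the raw httpx response on the exception, so the
        # Retry-After / x-ratelimit-* headers from Azure/OpenAI are still there.
        headers = dict(e.response.headers) if e.response is not None else {}
        # Attach them before re-raising so a caller (e.g. a router) can back off
        # using the provider-supplied value. These attribute names are made up
        # for illustration; they are not litellm's actual fields.
        e.provider_headers = headers
        e.provider_retry_after = headers.get("retry-after")
        raise
```
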
Mike
a13a506573 Add the "stop" parameter to the Mistral API interface; it is now supported 2024-08-16 23:29:22 +00:00
Krrish Dholakia
dd2ea72cb4 fix(openai.py): fix position of invalid_params param 2024-08-10 09:52:27 -07:00
Krrish Dholakia
fe2aa706e8 refactor(openai/azure.py): move to returning openai/azure response headers by default
Allows token tracking to work more reliably across multiple azure/openai deployments
2024-08-02 09:42:08 -07:00
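
(A short sketch of the kind of headers being surfaced by the commit above, using the OpenAI SDK's raw-response interface directly; how litellm exposes them on its side is not shown here, and the model name is illustrative.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# with_raw_response exposes the HTTP headers alongside the parsed object; the
# x-ratelimit-* values are what make per-deployment token tracking possible.
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "hi"}],
)
print(raw.headers.get("x-ratelimit-remaining-requests"))
print(raw.headers.get("x-ratelimit-remaining-tokens"))

response = raw.parse()  # the usual ChatCompletion object
print(response.choices[0].message.content)
```
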
Ishaan Jaff
66211b42db fix linting errors 2024-07-30 12:51:39 -07:00
Ishaan Jaff
43a06f408c feat add support for alist_batches 2024-07-30 08:18:52 -07:00
Krrish Dholakia
5b71421a7b feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral
Closes https://github.com/BerriAI/litellm/issues/4874
2024-07-27 12:54:14 -07:00
Ishaan Jaff
2541d5f625 add verbose_logger.debug to retrieve batch 2024-07-26 18:26:39 -07:00
Ishaan Jaff
2432c90515 feat - support health check audio_speech 2024-07-25 17:26:14 -07:00
Ishaan Jaff
e3142b4294 fix whisper health check with litellm 2024-07-25 17:22:57 -07:00
Krrish Dholakia
f4a388f217 fix(openai.py): check if error body is a dictionary before indexing into it 2024-07-22 18:12:04 -07:00
Ishaan Jaff
b901a65572 fix make_sync_openai_audio_transcriptions_request 2024-07-20 20:03:12 -07:00
Ishaan Jaff
66c73e4425 fix merge conflicts 2024-07-20 19:14:20 -07:00
Ishaan Jaff
f6225623e9 Merge branch 'main' into litellm_return-response_headers 2024-07-20 19:05:56 -07:00
Ishaan Jaff
82764d2cec fix make_sync_openai_audio_transcriptions_request 2024-07-20 18:17:21 -07:00
Ishaan Jaff
5e4d291244 rename to _response_headers 2024-07-20 17:31:16 -07:00
Ishaan Jaff
5e52f50a82 return response headers 2024-07-20 15:26:44 -07:00
Krrish Dholakia
a27454b8e3 fix(openai.py): support completion, streaming, async_streaming 2024-07-20 15:23:42 -07:00
Krrish Dholakia
86c9e05c10 fix(openai.py): drop invalid params if drop_params: true for azure ai
Fixes https://github.com/BerriAI/litellm/issues/4800
2024-07-20 15:08:15 -07:00
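
(A minimal sketch of the drop_params behavior the commit above relies on, assuming current litellm usage; the model name and the dropped parameter are illustrative.)

```python
import litellm

# With drop_params enabled, parameters the target provider does not accept are
# dropped from the request instead of raising an unsupported-params error.
litellm.drop_params = True  # global toggle

response = litellm.completion(
    model="azure_ai/command-r-plus",  # Azure AI Studio-style model name (illustrative)
    messages=[{"role": "user", "content": "hi"}],
    frequency_penalty=0.2,  # example of a param some providers reject
)
print(response.choices[0].message.content)
```
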
Ishaan Jaff
3427838ce5 openai - return response headers 2024-07-20 15:04:27 -07:00
Ishaan Jaff
64dbe07593 openai return response headers 2024-07-20 14:07:41 -07:00
Ishaan Jaff
ee33a80486 fix - remove index from tool calls (cohere error) 2024-07-16 21:49:45 -07:00
Ishaan Jaff
a542e7be61 add all openai file endpoints 2024-07-10 15:35:21 -07:00
Ishaan Jaff
99fd388943 add retrieve file to litellm SDK 2024-07-10 14:51:48 -07:00
Ishaan Jaff
5587dbbd32 add async assistants delete support 2024-07-10 11:14:40 -07:00
Ishaan Jaff
5bf430f201 add delete assistant SDK 2024-07-10 10:33:00 -07:00
Ishaan Jaff
f4f07e13f3 add acreate_assistants 2024-07-09 09:33:41 -07:00
Ishaan Jaff
9e22ce905e add create_assistants 2024-07-09 08:51:42 -07:00
Krrish Dholakia
298505c47c fix(whisper---handle-openai/azure-vtt-response-format): Fixes https://github.com/BerriAI/litellm/issues/4595 2024-07-08 09:10:40 -07:00
Krrish Dholakia
bb905d7243 fix(utils.py): support 'drop_params' for 'parallel_tool_calls'
Closes https://github.com/BerriAI/litellm/issues/4584
'parallel_tool_calls' is an OpenAI-only param
2024-07-08 07:36:41 -07:00
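
(A small sketch of the parallel_tool_calls case from the commit above, assuming a litellm version that accepts a per-request drop_params; the model and tool definition are illustrative.)

```python
import litellm

# parallel_tool_calls is OpenAI-only, so with drop_params it should be stripped
# before the request reaches a provider that does not accept it.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

response = litellm.completion(
    model="claude-3-haiku-20240307",  # non-OpenAI model (illustrative)
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[weather_tool],
    parallel_tool_calls=False,  # OpenAI-only param; dropped when unsupported
    drop_params=True,           # assumes a litellm version with per-request drop_params
)
print(response.choices[0].message)
```
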
Ishaan Jaff
4033302656 feat - return headers for openai audio transcriptions 2024-07-01 20:27:27 -07:00
Ishaan Jaff
04a975d486 feat - add response_headers in litellm_logging_obj 2024-07-01 17:25:15 -07:00
Ishaan Jaff
140f7fe254 return azure response headers 2024-07-01 17:09:06 -07:00
Ishaan Jaff
4b7feb3261 feat - return response headers for async openai requests 2024-07-01 17:01:42 -07:00
Krrish Dholakia
d10912beeb fix(main.py): pass in openrouter as custom provider for openai client call
Fixes https://github.com/BerriAI/litellm/issues/4414
2024-06-28 21:26:42 -07:00
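
(A minimal sketch of routing an OpenRouter model through litellm's OpenAI-compatible client path, as referenced in the commit above; the model name is illustrative and OPENROUTER_API_KEY is assumed to be set.)

```python
import litellm

# The "openrouter/" prefix tells litellm to route the call through its
# OpenAI-compatible client with OpenRouter as the provider.
response = litellm.completion(
    model="openrouter/openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)
```
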
Ishaan Jaff
b7bca0af6c fix - reuse client initialized on proxy config 2024-06-26 16:16:58 -07:00
Ishaan Jaff
eb8a9b2654 fix - /moderation: don't require a model 2024-06-21 16:00:43 -07:00
Krrish Dholakia
9cc104eb03 fix(main.py): route openai calls to /completion when text_completion is True 2024-06-19 12:37:05 -07:00
Krrish Dholakia
5ad095ad9d fix(openai.py): deepinfra function calling - drop_params support for unsupported tool choice value 2024-06-18 16:19:57 -07:00
Ishaan Jaff
e128dc4e1f feat - add azure ai studio models on litellm ui 2024-06-12 20:28:16 -07:00
Ishaan Jaff
7eeef7ec1f feat - add mistral embedding config 2024-06-12 15:00:00 -07:00
Krrish Dholakia
650ea6d0c3 feat(assistants/main.py): support arun_thread_stream 2024-06-04 16:47:51 -07:00
Krrish Dholakia
f3d78532f9 feat(assistants/main.py): add assistants api streaming support 2024-06-04 16:30:35 -07:00
Krrish Dholakia
7163bce37b feat(assistants/main.py): Closes https://github.com/BerriAI/litellm/issues/3993 2024-06-03 18:47:05 -07:00
Krrish Dholakia
93c9ea160d fix(openai.py): fix client caching logic 2024-06-01 16:45:56 -07:00
Ishaan Jaff
47dd52c566 fix - use hashed api key 2024-06-01 09:24:16 -07:00
Ishaan Jaff
47337c172e fix - in memory client cache 2024-06-01 08:58:22 -07:00