Commit graph

1171 commits

Author SHA1 Message Date
Krrish Dholakia
a765bae2b6 fix(http_handler.py): raise more detailed http status errors 2024-06-28 15:12:38 -07:00
Krrish Dholakia
66e3a4f30e fix(support-'alt=sse'-param): Fixes https://github.com/BerriAI/litellm/issues/4459 2024-06-28 10:40:47 -07:00
Krrish Dholakia
1d690fd3f3 fix(vertex_httpx.py): only use credential project id, if user project id not given 2024-06-27 22:08:14 -07:00
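The precedence described in the commit above (prefer the caller-supplied project id, fall back to the one embedded in the credentials) can be sketched as follows — `resolve_project_id` is a hypothetical helper for illustration, not litellm's actual function:

```python
from typing import Optional

def resolve_project_id(user_project_id: Optional[str],
                       credential_project_id: Optional[str]) -> Optional[str]:
    """Prefer the explicitly supplied project id; only fall back to the
    project id baked into the service-account credentials when the user
    did not pass one."""
    return user_project_id if user_project_id is not None else credential_project_id
```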
Krish Dholakia
134aa3cbd8 Merge pull request #4449 from BerriAI/litellm_azure_tts
feat(azure.py): azure tts support
2024-06-27 21:33:38 -07:00
Krish Dholakia
fd57294740 Merge pull request #4445 from ushuz/fix-google-creds
fix: do not resolve vertex project id from creds
2024-06-27 21:31:59 -07:00
Ishaan Jaff
5eba54b64f Merge pull request #4451 from BerriAI/litellm_azure_fix_pre_post_logging
[Fix] Azure Post-API Call occurs before Pre-API Call in CustomLogger
2024-06-27 20:18:14 -07:00
Krrish Dholakia
01064b4e23 fix(factory.py): get image type from response headers
Fixes https://github.com/BerriAI/litellm/issues/4441
2024-06-27 18:19:16 -07:00
Ishaan Jaff
1c263d057d azure - log post api call 2024-06-27 17:38:03 -07:00
Krrish Dholakia
58244ba71f feat(azure.py): azure tts support
Closes https://github.com/BerriAI/litellm/issues/4002
2024-06-27 16:59:25 -07:00
John HU
a48b2799bd Do not resolve project id from creds 2024-06-27 15:26:36 -07:00
Ishaan Jaff
be7b1aa498 fix: raise a better error message on failed vertex import 2024-06-27 12:02:19 -07:00
Ishaan Jaff
8398065404 fix gemini test 2024-06-26 19:00:30 -07:00
Ishaan Jaff
339c5d17ed add initial support for volcengine 2024-06-26 16:53:44 -07:00
Ishaan Jaff
026a2aada5 Merge pull request #4431 from BerriAI/litellm_use_openai_client_proxy
[Fix-Proxy] Azure Embeddings use AsyncAzureOpenAI Client initialized on litellm.Router for requests
2024-06-26 16:37:31 -07:00
Krrish Dholakia
7fe4b3d8f4 fix(bedrock_httpx.py): Fix https://github.com/BerriAI/litellm/issues/4415 2024-06-26 16:19:46 -07:00
Ishaan Jaff
db4b9b8a84 fix - reuse client initialized on proxy config 2024-06-26 16:16:58 -07:00
Ishaan Jaff
7505a4be02 Merge pull request #4418 from BerriAI/litellm_fireworks_ai_tool_calling
[Feat] Add Fireworks AI Tool calling support
2024-06-26 08:30:06 -07:00
Krrish Dholakia
94441b3fbf docs(reliable_completions.md): improve headers for easier searching 2024-06-26 08:09:31 -07:00
Ishaan Jaff
6c200ee291 fix + test fireworks ai param mapping for tools 2024-06-26 06:57:04 -07:00
Ishaan Jaff
acfc97aa66 fix fireworks ai config 2024-06-26 06:40:44 -07:00
Ishaan Jaff
391f638f79 feat - add fireworks ai config for param mapping 2024-06-26 06:29:21 -07:00
Krrish Dholakia
91bbef4bcd fix(predibase.py): support json schema on predibase 2024-06-25 16:03:47 -07:00
Krrish Dholakia
1e51b8894f fix(utils.py): predibase exception mapping - map 424 as a badrequest error 2024-06-25 13:47:38 -07:00
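The exception-mapping fix above treats HTTP 424 (Failed Dependency) from Predibase as a non-retryable bad request. A minimal sketch of that kind of status-code mapping — class and function names here are illustrative, not litellm's real exception hierarchy:

```python
class BadRequestError(Exception):
    """Client-side error (HTTP 4xx) that should not be retried."""

class ProviderError(Exception):
    """Generic upstream provider error."""

# Illustrative mapping; 424 is routed to BadRequestError,
# mirroring the fix described in the commit above.
STATUS_TO_EXCEPTION = {
    400: BadRequestError,
    422: BadRequestError,
    424: BadRequestError,
}

def map_status(status_code: int, message: str) -> Exception:
    exc_cls = STATUS_TO_EXCEPTION.get(status_code, ProviderError)
    return exc_cls(f"{status_code}: {message}")
```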
Ishaan Jaff
ca372f7599 feat - add param mapping for nvidia nim 2024-06-25 09:13:08 -07:00
Ishaan Jaff
5021eec6a6 add nvidia nim to __init__ 2024-06-25 08:53:06 -07:00
Krish Dholakia
165fb72e31 Merge pull request #4373 from CorrM/main
[Fix-Improve] Improve Ollama prompt input and fix Ollama function calling key error
2024-06-25 06:27:43 -07:00
corrm
f5d20d72e4 Improve ollama prompt: this formula gives good results with AutoGen 2024-06-25 13:53:27 +03:00
corrm
c7c74ac03f Rename ollama prompt terms:
- 'Function' to 'FunctionName'
- 'Tool Call' to 'FunctionCall'
- 'Tool Call Result' to 'FunctionCall Result'

_I found these changes make some models better_
2024-06-25 12:40:07 +03:00
Krish Dholakia
e97f9c530c Merge pull request #4392 from BerriAI/litellm_gemini_content_policy_errors
fix(vertex_httpx.py): cover gemini content violation (on prompt)
2024-06-24 20:00:06 -07:00
Krrish Dholakia
efaffd84e1 fix(utils.py): catch 422-status errors 2024-06-24 19:41:48 -07:00
Krrish Dholakia
cdf56dd320 fix(vertex_httpx.py): Return empty model response for content filter violations 2024-06-24 19:22:20 -07:00
Krrish Dholakia
62ff12c0b6 fix(vertex_httpx.py): cover gemini content violation (on prompt) 2024-06-24 19:13:56 -07:00
Islam Nofl
3a3ddd9117 Rename ollama prompt term 'Function' to 'Name' 2024-06-24 08:01:15 +03:00
corrm
51e7115c05 Added improved function name handling in ollama_async_streaming 2024-06-24 05:56:56 +03:00
corrm
93cb6d6175 chore: Improved OllamaConfig get_required_params and ollama_acompletion and ollama_async_streaming functions 2024-06-24 05:55:22 +03:00
corrm
2cf36f96bf chore: Improved prompt generation in ollama_pt function 2024-06-24 05:54:58 +03:00
Krrish Dholakia
4055381cdb fix(add-exception-mapping-+-langfuse-exception-logging-for-streaming-exceptions): add exception mapping + langfuse exception logging for streaming exceptions
Fixes https://github.com/BerriAI/litellm/issues/4338
2024-06-22 21:26:15 -07:00
Krrish Dholakia
14fdbf26a6 fix(vertex_httpx.py): flush remaining chunks from stream 2024-06-22 20:33:54 -07:00
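Flushing leftover chunks at end-of-stream is the classic fix for a re-chunking buffer that otherwise drops a trailing partial record. A generic sketch of the pattern (not litellm's actual streaming code):

```python
def iter_complete_lines(chunks):
    """Re-chunk a string stream into newline-delimited records,
    flushing whatever remains in the buffer once the stream ends."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            yield line
    if buffer:  # flush the trailing partial record instead of dropping it
        yield buffer
```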
Krrish Dholakia
2d8135231f fix(vertex_httpx.py): ignore vertex finish reason - wait for stream to end
Fixes https://github.com/BerriAI/litellm/issues/4339
2024-06-22 20:21:01 -07:00
Ishaan Jaff
f4db69a275 fix - /moderation doesn't require a model 2024-06-21 16:00:43 -07:00
Krrish Dholakia
80f405780f fix(vertex_httpx.py): cleanup docstring 2024-06-21 09:11:11 -07:00
Krrish Dholakia
60b4249767 fix(utils.py): re-integrate separate gemini optional param mapping (google ai studio)
Fixes https://github.com/BerriAI/litellm/issues/4333
2024-06-21 09:01:32 -07:00
Krrish Dholakia
c88c3002a4 fix(vertex_httpx.py): allow passing extra headers
Closes https://github.com/BerriAI/litellm/pull/4327
2024-06-20 21:14:44 -07:00
Krrish Dholakia
1f87f45709 fix(vertex_httpx.py): support setting custom api base for vertex ai calls
Closes https://github.com/BerriAI/litellm/issues/4317
2024-06-20 16:33:37 -07:00
Krrish Dholakia
e352f954b0 fix(vertex_ai.py): check if message length > 0 before merging 2024-06-19 18:47:43 -07:00
Krish Dholakia
8fe575a98d Merge pull request #4199 from hawktang/main
support vertex_credentials filepath
2024-06-19 18:47:32 -07:00
Krrish Dholakia
0a96865c48 fix(vertex_httpx.py): fix supports system message check for vertex_ai_beta
Fixes https://github.com/BerriAI/litellm/issues/4283
2024-06-19 13:17:22 -07:00
Krrish Dholakia
7966266d4f fix(main.py): route openai calls to /completion when text_completion is True 2024-06-19 12:37:05 -07:00
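The routing decision in the commit above can be sketched as a simple flag check — endpoint strings and the helper name are illustrative, not litellm's internals:

```python
def choose_endpoint(text_completion: bool) -> str:
    # Route to the legacy text-completion endpoint when the caller
    # asked for text completion; otherwise use chat completions.
    return "/completions" if text_completion else "/chat/completions"
```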
Krish Dholakia
4c5eb58fc9 Merge pull request #4266 from BerriAI/litellm_gemini_image_url
Support 'image url' to vertex ai / google ai studio gemini models
2024-06-18 20:39:25 -07:00
Krish Dholakia
fb01094d8b Merge pull request #4271 from BerriAI/litellm_vertex_httpx_fix
fix(vertex_httpx.py): Correctly handle Vertex content policy violation error
2024-06-18 20:19:46 -07:00