Commit graph

2095 commits

Author SHA1 Message Date
Krrish Dholakia
0bcfdafc58 fix(utils.py): fix model registration to model cost map
Fixes https://github.com/BerriAI/litellm/issues/4972
2024-07-30 18:15:00 -07:00
Krrish Dholakia
802e39b606 fix(utils.py): fix cost tracking for vertex ai partner models 2024-07-30 14:20:52 -07:00
Krish Dholakia
14c2aabf63 Merge pull request #4948 from dleen/response
fixes: #4947 Bedrock context exception does not have a response
2024-07-29 15:03:40 -07:00
David Leen
55cc3adbec fixes: #4947 Bedrock context exception does not have a response 2024-07-29 14:23:56 -07:00
Krrish Dholakia
00dde68001 fix(utils.py): fix trim_messages to handle tool calling
Fixes https://github.com/BerriAI/litellm/issues/4931
2024-07-29 13:04:41 -07:00
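A minimal sketch of exercising the trim_messages fix above, assuming the helper is importable from litellm.utils (the tool-call message shape here is illustrative):

```python
from litellm.utils import trim_messages

# A conversation that includes an assistant tool call (no plain content),
# the shape that previously tripped up trim_messages.
messages = [
    {"role": "user", "content": "What's the weather in SF?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "SF"}'},
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "Sunny, 18C"},
]

# Trim to fit the model's context window; tool-call entries are handled too.
trimmed = trim_messages(messages, model="gpt-3.5-turbo")
```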
Krrish Dholakia
708b427a04 fix(utils.py): correctly re-raise azure api connection error
2024-07-29 12:28:25 -07:00
Krrish Dholakia
2a705dbb49 fix(utils.py): check if tools is iterable before indexing into it
Fixes https://github.com/BerriAI/litellm/issues/4933
2024-07-29 09:01:32 -07:00
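The general shape of such a guard (a generic sketch, not the exact utils.py code):

```python
def first_tool_name(optional_params: dict):
    # Guard: 'tools' may be missing, None, or a non-indexable value,
    # so verify it is a non-empty list before indexing into it.
    tools = optional_params.get("tools")
    if isinstance(tools, list) and len(tools) > 0:
        return tools[0].get("function", {}).get("name")
    return None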
Ravi N
5cf0667d38 Allow zero temperature for Sagemaker models based on config
Since Sagemaker can host any kind of model, some models allow a
temperature of zero. However, this is not enabled by default; it is
only allowed when set via config
2024-07-28 21:55:53 -04:00
Krrish Dholakia
dc7df00581 fix(utils.py): fix supported openai params
2024-07-27 22:03:40 -07:00
Krish Dholakia
1c50339580 Merge pull request #4925 from BerriAI/litellm_vertex_mistral
feat(vertex_ai_partner.py): Vertex AI Mistral Support
2024-07-27 21:51:26 -07:00
Krish Dholakia
0525fb75f3 Merge branch 'main' into litellm_vertex_migration 2024-07-27 20:25:12 -07:00
Krrish Dholakia
fcac9bd2fa fix(utils.py): support fireworks ai finetuned models
Fixes https://github.com/BerriAI/litellm/issues/4923
2024-07-27 15:38:27 -07:00
Krrish Dholakia
70b281c0aa fix(utils.py): support fireworks ai finetuned models
Fixes https://github.com/BerriAI/litellm/issues/4923
2024-07-27 15:37:28 -07:00
Krrish Dholakia
56ba0c62f3 feat(utils.py): fix openai-like streaming 2024-07-27 15:32:57 -07:00
Krrish Dholakia
089539e21e fix(utils.py): add exception mapping for databricks errors 2024-07-27 13:13:31 -07:00
Krrish Dholakia
ce7257ec5e feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral
Closes https://github.com/BerriAI/litellm/issues/4874
2024-07-27 12:54:14 -07:00
Krrish Dholakia
3a1eedfbf3 feat(ollama_chat.py): support ollama tool calling
Closes https://github.com/BerriAI/litellm/issues/4812
2024-07-26 21:51:54 -07:00
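A minimal sketch of Ollama tool calling after this commit (the model name and tool schema are illustrative):

```python
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Requires a locally running Ollama server with the model pulled.
response = litellm.completion(
    model="ollama_chat/llama3.1",  # illustrative local model
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```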
Krrish Dholakia
1562cba823 fix(utils.py): fix cache hits for streaming
Fixes https://github.com/BerriAI/litellm/issues/4109
2024-07-26 19:04:08 -07:00
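A sketch of the scenario this fixes, assuming the standard litellm cache setup (Cache class from litellm.caching):

```python
import litellm
from litellm.caching import Cache

litellm.cache = Cache()  # in-memory cache by default

messages = [{"role": "user", "content": "hi"}]

# The first call populates the cache; an identical streaming call
# should now register as a cache hit instead of missing.
for chunk in litellm.completion(
    model="gpt-3.5-turbo", messages=messages, stream=True, caching=True
):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```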
Krrish Dholakia
d3ff21181c fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking 2024-07-25 22:12:07 -07:00
Ishaan Jaff
1103c614a0 Merge branch 'main' into litellm_proxy_support_all_providers 2024-07-25 20:15:37 -07:00
Krrish Dholakia
e7744177cb fix(utils.py): don't raise error on openai content filter during streaming - return as is
Fixes an issue where we would raise an error, whereas OpenAI returns the chunk with finish_reason 'content_filter'
2024-07-25 19:50:52 -07:00
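After this change a stream behaves like OpenAI's: the final chunk carries finish_reason 'content_filter' instead of litellm raising. A sketch of handling it:

```python
import litellm

stream = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "..."}],
    stream=True,
)

for chunk in stream:
    choice = chunk.choices[0]
    if choice.finish_reason == "content_filter":
        # Content was filtered; handle gracefully rather than crashing.
        print("response stopped by content filter")
        break
    if choice.delta.content:
        print(choice.delta.content, end="")
```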
Krish Dholakia
a5cea7929d Merge branch 'main' into bedrock-llama3.1-405b 2024-07-25 19:29:10 -07:00
Ishaan Jaff
422b4d7e0f support using */* 2024-07-25 18:48:56 -07:00
Krrish Dholakia
9b1c7066b7 feat(utils.py): support async streaming for custom llm provider 2024-07-25 17:11:57 -07:00
Krrish Dholakia
bf23aac11d feat(utils.py): support sync streaming for custom llm provider 2024-07-25 16:47:32 -07:00
Krrish Dholakia
54e1ca29b7 feat(custom_llm.py): initial working commit for writing your own custom LLM handler
Fixes https://github.com/BerriAI/litellm/issues/4675

Also addresses https://github.com/BerriAI/litellm/discussions/4677
2024-07-25 15:33:05 -07:00
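A minimal sketch of the handler pattern this commit introduces, based on the CustomLLM class and custom_provider_map registration (names here are illustrative, and details may differ from the final API):

```python
import litellm
from litellm import CustomLLM

class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # Call out to your own model here; mock_response keeps the sketch runnable.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hello"}],
            mock_response="Hi from my custom handler!",
        )

# Register the handler under a provider name, then route to it by prefix.
litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

resp = litellm.completion(
    model="my-custom-llm/my-model",
    messages=[{"role": "user", "content": "hello"}],
)
```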
David Manouchehri
5a7be22038 Check for converse support first. 2024-07-25 21:16:23 +00:00
Krrish Dholakia
5945da4a66 fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions
Fixes https://github.com/BerriAI/litellm/issues/749
2024-07-25 09:57:19 -07:00
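The instruct model uses the legacy /completions route, which maps to litellm's text_completion; a minimal sketch:

```python
from litellm import text_completion

resp = text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say this is a test",
)
print(resp.choices[0].text)
```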
wslee
c2efb260c1 support dynamic api base 2024-07-25 11:14:38 +09:00
wslee
e7fbb7e40a add support for friendli dedicated endpoint 2024-07-25 11:14:35 +09:00
Ishaan Jaff
1e65173b88 add UnsupportedParamsError to litellm exceptions 2024-07-24 12:20:14 -07:00
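A sketch of catching the new exception type, assuming it is importable from litellm.exceptions (the model name and parameter are illustrative):

```python
import litellm
from litellm.exceptions import UnsupportedParamsError

try:
    litellm.completion(
        model="some-provider/some-model",  # illustrative
        messages=[{"role": "user", "content": "hi"}],
        some_unsupported_param=True,  # illustrative unsupported parameter
    )
except UnsupportedParamsError as e:
    # Either drop the param (e.g. litellm.drop_params = True) or handle here.
    print(f"unsupported param: {e}")
```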
Krrish Dholakia
23a3be184b build(model_prices_and_context_window.json): add model pricing for vertex ai llama 3.1 api 2024-07-23 17:36:07 -07:00
Krrish Dholakia
778afcee31 feat(vertex_ai_llama.py): vertex ai llama3.1 api support
Initial working commit for vertex ai llama 3.1 api support
2024-07-23 17:07:30 -07:00
Krrish Dholakia
271407400a fix(utils.py): support raw response headers for streaming requests 2024-07-23 11:58:58 -07:00
Krrish Dholakia
d55b516f3c feat(utils.py): support passing openai response headers to client, if enabled
Allows openai/openai-compatible provider response headers to be sent to client, if 'return_response_headers' is enabled
2024-07-23 11:30:52 -07:00
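A sketch of how this might be used, with the attribute name taken from the _response_headers rename commit further down this list (it is a private attribute, so treat the details as assumptions):

```python
import litellm

litellm.return_response_headers = True  # opt in to forwarding headers

resp = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)

# e.g. OpenAI rate-limit headers such as x-ratelimit-remaining-requests
print(resp._response_headers)
```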
Ishaan Jaff
71c755d9a2 Merge pull request #3905 from giritatavarty-8451/litellm_triton_chatcompletion_support
Litellm triton chatcompletion support - Resubmit of #3895
2024-07-23 10:30:26 -07:00
Ishaan Jaff
8ae98008b3 fix: raise the correct provider on content policy violation 2024-07-22 16:03:15 -07:00
Ishaan Jaff
3bbb4e8f1d fix check against _known_custom_logger_compatible_callbacks 2024-07-22 15:43:43 -07:00
Krrish Dholakia
98382a465a fix(utils.py): allow dropping extra_body in additional_drop_params
Fixes https://github.com/BerriAI/litellm/issues/4769
2024-07-20 19:12:58 -07:00
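A sketch of dropping extra_body via additional_drop_params for a provider that rejects it (the deployment name is illustrative):

```python
import litellm

resp = litellm.completion(
    model="azure/my-deployment",  # illustrative deployment name
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"foo": "bar"},
    additional_drop_params=["extra_body"],  # strip extra_body before sending
)
```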
Ishaan Jaff
2dcbd5c534 rename to _response_headers 2024-07-20 17:31:16 -07:00
Ishaan Jaff
966733ed22 return response headers in response 2024-07-20 14:59:08 -07:00
Krish Dholakia
990444541c Merge pull request #4801 from BerriAI/litellm_dynamic_params_oai_compatible_endpoints
fix(utils.py): support dynamic params for openai-compatible providers
2024-07-19 21:07:06 -07:00
Krrish Dholakia
36ed00ec77 fix(utils.py): fix token_counter to handle empty tool calls in messages
Fixes https://github.com/BerriAI/litellm/pull/4749
2024-07-19 19:39:00 -07:00
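A sketch of the case this fixes: counting tokens for an assistant message whose content is empty because it only carries tool calls:

```python
from litellm import token_counter

messages = [
    {"role": "user", "content": "What's 2+2?"},
    {
        "role": "assistant",
        "content": "",  # empty content alongside tool calls previously broke counting
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "calculator", "arguments": '{"expr": "2+2"}'},
            }
        ],
    },
]

n_tokens = token_counter(model="gpt-3.5-turbo", messages=messages)
print(n_tokens)
```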
Krrish Dholakia
a6e48db8b0 fix(utils.py): fix get_llm_provider to support dynamic params for openai-compatible providers 2024-07-19 19:36:31 -07:00
Krrish Dholakia
b838ff22d5 fix(utils.py): add exception mapping for bedrock image internal server error 2024-07-19 19:30:41 -07:00
Sophia Loris
adae0777d6 resolve merge conflicts 2024-07-19 09:45:53 -05:00
Sophia Loris
91fa69c0c2 Add support for Triton streaming & Triton async completions 2024-07-19 09:35:27 -05:00
Krrish Dholakia
5d0bb0c6ee fix(utils.py): fix status code in exception mapping 2024-07-18 18:04:59 -07:00
Krish Dholakia
c010cd2dca Merge pull request #4729 from vingiarrusso/vgiarrusso/guardrails
Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook
2024-07-17 22:24:35 -07:00
Ishaan Jaff
b473e8da83 Merge pull request #4758 from BerriAI/litellm_langsmith_async_support
[Feat] Use Async Httpx client for langsmith logging
2024-07-17 16:54:40 -07:00