Commit graph

2083 commits

Author SHA1 Message Date
Krrish Dholakia
70b281c0aa fix(utils.py): support fireworks ai finetuned models
Fixes https://github.com/BerriAI/litellm/issues/4923
2024-07-27 15:37:28 -07:00
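
For context, a minimal sketch of calling a Fireworks AI fine-tuned model through litellm after this fix; the account/model path below is purely illustrative, not a real deployment:

    import litellm

    # Fine-tuned Fireworks AI models are addressed by their full account/model path
    # (hypothetical path shown for illustration).
    response = litellm.completion(
        model="fireworks_ai/accounts/my-account/models/my-finetuned-llama",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
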
Krrish Dholakia
56ba0c62f3 feat(utils.py): fix openai-like streaming 2024-07-27 15:32:57 -07:00
Krrish Dholakia
089539e21e fix(utils.py): add exception mapping for databricks errors 2024-07-27 13:13:31 -07:00
Krrish Dholakia
ce7257ec5e feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral
Closes https://github.com/BerriAI/litellm/issues/4874
2024-07-27 12:54:14 -07:00
Krrish Dholakia
3a1eedfbf3 feat(ollama_chat.py): support ollama tool calling
Closes https://github.com/BerriAI/litellm/issues/4812
2024-07-26 21:51:54 -07:00
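
A sketch of what Ollama tool calling looks like through litellm after this change, assuming a locally running Ollama server and an OpenAI-style tool definition (the tool itself is hypothetical):

    import litellm

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = litellm.completion(
        model="ollama_chat/llama3.1",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    print(response.choices[0].message.tool_calls)
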
Krrish Dholakia
1562cba823 fix(utils.py): fix cache hits for streaming
Fixes https://github.com/BerriAI/litellm/issues/4109
2024-07-26 19:04:08 -07:00
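
A sketch of the scenario this fix targets: streaming the same request twice with litellm's in-memory cache enabled, so the second call should be served as a cache hit (exact caching behavior may vary by version):

    import litellm
    from litellm.caching import Cache

    litellm.cache = Cache()  # in-memory cache

    for _ in range(2):
        stream = litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Tell me a short joke"}],
            stream=True,
            caching=True,
        )
        for chunk in stream:
            # the second iteration should be replayed from the cache
            print(chunk.choices[0].delta.content or "", end="")
        print()
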
Krrish Dholakia
d3ff21181c fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking 2024-07-25 22:12:07 -07:00
Ishaan Jaff
1103c614a0 Merge branch 'main' into litellm_proxy_support_all_providers 2024-07-25 20:15:37 -07:00
Krrish Dholakia
e7744177cb fix(utils.py): don't raise error on openai content filter during streaming - return as is
Fixes an issue where we would raise an error, whereas OpenAI returns the chunk with finish_reason set to 'content_filter'
2024-07-25 19:50:52 -07:00
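
With this change, a streamed OpenAI content-filter event is surfaced in the chunks rather than raised; a sketch of how a caller might check for it (chunk shape follows the OpenAI-compatible streaming format):

    import litellm

    stream = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "your prompt here"}],
        stream=True,
    )
    for chunk in stream:
        choice = chunk.choices[0]
        if choice.finish_reason == "content_filter":
            # the chunk is now returned as-is instead of raising an exception
            print("response was filtered by the provider")
            break
        print(choice.delta.content or "", end="")
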
Krish Dholakia
a5cea7929d Merge branch 'main' into bedrock-llama3.1-405b 2024-07-25 19:29:10 -07:00
Ishaan Jaff
422b4d7e0f support using */* 2024-07-25 18:48:56 -07:00
Krrish Dholakia
9b1c7066b7 feat(utils.py): support async streaming for custom llm provider 2024-07-25 17:11:57 -07:00
Krrish Dholakia
bf23aac11d feat(utils.py): support sync streaming for custom llm provider 2024-07-25 16:47:32 -07:00
Krrish Dholakia
54e1ca29b7 feat(custom_llm.py): initial working commit for writing your own custom LLM handler
Fixes https://github.com/BerriAI/litellm/issues/4675

Also addresses https://github.com/BerriAI/litellm/discussions/4677
2024-07-25 15:33:05 -07:00
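
Taken together with the sync/async streaming commits above, a minimal sketch of the custom handler pattern this introduces; the provider name and handler are hypothetical, and exact class/hook names may differ by version:

    import litellm
    from litellm import CustomLLM

    class MyCustomLLM(CustomLLM):
        def completion(self, *args, **kwargs) -> litellm.ModelResponse:
            # delegate anywhere you like; mock_response keeps the sketch self-contained
            return litellm.completion(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": "hello"}],
                mock_response="Hi from my custom handler!",
            )

    # register the handler under a custom provider prefix
    litellm.custom_provider_map = [
        {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
    ]

    resp = litellm.completion(
        model="my-custom-llm/my-model",
        messages=[{"role": "user", "content": "Hello world!"}],
    )
    print(resp.choices[0].message.content)
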
David Manouchehri
5a7be22038 Check for converse support first. 2024-07-25 21:16:23 +00:00
Krrish Dholakia
5945da4a66 fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions
Fixes https://github.com/BerriAI/litellm/issues/749
2024-07-25 09:57:19 -07:00
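
A sketch of the call path this fixes, using litellm's text-completion interface for the instruct model:

    import litellm

    response = litellm.text_completion(
        model="gpt-3.5-turbo-instruct",
        prompt="Say this is a test",
    )
    print(response.choices[0].text)
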
wslee
c2efb260c1 support dynamic api base 2024-07-25 11:14:38 +09:00
wslee
e7fbb7e40a add support for friendli dedicated endpoint 2024-07-25 11:14:35 +09:00
Ishaan Jaff
1e65173b88 add UnsupportedParamsError to litellm exceptions 2024-07-24 12:20:14 -07:00
Krrish Dholakia
23a3be184b build(model_prices_and_context_window.json): add model pricing for vertex ai llama 3.1 api 2024-07-23 17:36:07 -07:00
Krrish Dholakia
778afcee31 feat(vertex_ai_llama.py): vertex ai llama3.1 api support
Initial working commit for vertex ai llama 3.1 api support
2024-07-23 17:07:30 -07:00
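
A sketch of calling the new Vertex AI Llama 3.1 route; the model identifier and project/location values are illustrative and may not match the exact naming litellm expects:

    import litellm

    response = litellm.completion(
        model="vertex_ai/meta/llama3-405b-instruct-maas",  # illustrative model path
        messages=[{"role": "user", "content": "Hello from Vertex AI"}],
        vertex_project="my-gcp-project",   # hypothetical project
        vertex_location="us-central1",
    )
    print(response.choices[0].message.content)
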
Krrish Dholakia
271407400a fix(utils.py): support raw response headers for streaming requests 2024-07-23 11:58:58 -07:00
Krrish Dholakia
d55b516f3c feat(utils.py): support passing openai response headers to client, if enabled
Allows OpenAI/OpenAI-compatible provider response headers to be sent to the client, if 'return_response_headers' is enabled
2024-07-23 11:30:52 -07:00
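
A sketch of opting into raw provider headers; the attribute name follows the '_response_headers' rename noted further down this log and may differ across versions:

    import litellm

    litellm.return_response_headers = True

    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
    )
    # e.g. rate-limit headers passed through from the provider
    print(getattr(response, "_response_headers", None))
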
Ishaan Jaff
71c755d9a2 Merge pull request #3905 from giritatavarty-8451/litellm_triton_chatcompletion_support
Litellm triton chatcompletion support - Resubmit of #3895
2024-07-23 10:30:26 -07:00
Ishaan Jaff
8ae98008b3 fix: raise correct provider on content policy violation 2024-07-22 16:03:15 -07:00
Ishaan Jaff
3bbb4e8f1d fix check against _known_custom_logger_compatible_callbacks 2024-07-22 15:43:43 -07:00
Krrish Dholakia
98382a465a fix(utils.py): allow dropping extra_body in additional_drop_params
Fixes https://github.com/BerriAI/litellm/issues/4769
2024-07-20 19:12:58 -07:00
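
A sketch of dropping extra_body for a provider that rejects it, using the additional_drop_params argument this fix extends:

    import litellm

    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "hi"}],
        extra_body={"some_provider_flag": True},   # would otherwise be forwarded
        additional_drop_params=["extra_body"],     # now allowed to be dropped
    )
    print(response.choices[0].message.content)
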
Ishaan Jaff
2dcbd5c534 rename to _response_headers 2024-07-20 17:31:16 -07:00
Ishaan Jaff
966733ed22 return response headers in response 2024-07-20 14:59:08 -07:00
Krish Dholakia
990444541c Merge pull request #4801 from BerriAI/litellm_dynamic_params_oai_compatible_endpoints
fix(utils.py): support dynamic params for openai-compatible providers
2024-07-19 21:07:06 -07:00
Krrish Dholakia
36ed00ec77 fix(utils.py): fix token_counter to handle empty tool calls in messages
Fixes https://github.com/BerriAI/litellm/pull/4749
2024-07-19 19:39:00 -07:00
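
A sketch of the case this fix covers: counting tokens for a message list that includes an assistant turn carrying a (here empty) tool_calls field:

    import litellm

    messages = [
        {"role": "user", "content": "What's the weather?"},
        {"role": "assistant", "content": "Let me check.", "tool_calls": []},
    ]
    num_tokens = litellm.token_counter(model="gpt-3.5-turbo", messages=messages)
    print(num_tokens)
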
Krrish Dholakia
a6e48db8b0 fix(utils.py): fix get_llm_provider to support dynamic params for openai-compatible providers 2024-07-19 19:36:31 -07:00
Krrish Dholakia
b838ff22d5 fix(utils.py): add exception mapping for bedrock image internal server error 2024-07-19 19:30:41 -07:00
Sophia Loris
adae0777d6 resolve merge conflicts 2024-07-19 09:45:53 -05:00
Sophia Loris
91fa69c0c2 Add support for Triton streaming & triton async completions 2024-07-19 09:35:27 -05:00
Krrish Dholakia
5d0bb0c6ee fix(utils.py): fix status code in exception mapping 2024-07-18 18:04:59 -07:00
Krish Dholakia
c010cd2dca Merge pull request #4729 from vingiarrusso/vgiarrusso/guardrails
Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook
2024-07-17 22:24:35 -07:00
Ishaan Jaff
b473e8da83 Merge pull request #4758 from BerriAI/litellm_langsmith_async_support
[Feat] Use Async Httpx client for langsmith logging
2024-07-17 16:54:40 -07:00
Ishaan Jaff
1abd66db1b fix langsmith logging for streaming 2024-07-17 16:04:45 -07:00
Ishaan Jaff
d3ee7a947c use langsmith as a custom callback class 2024-07-17 15:35:13 -07:00
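
For reference, enabling the LangSmith logger these commits touch is a one-line callback setting; a sketch, assuming LANGSMITH_API_KEY is set to a valid key:

    import os
    import litellm

    os.environ["LANGSMITH_API_KEY"] = "your-langsmith-key"  # placeholder
    litellm.success_callback = ["langsmith"]

    litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "hello"}],
    )
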
Krrish Dholakia
830c21b1fd fix(utils.py): return optional params from groq 2024-07-17 12:09:08 -07:00
Krrish Dholakia
cbd011a2aa fix(utils.py): fix linting error 2024-07-16 20:57:34 -07:00
Krrish Dholakia
5651a66b88 fix(utils.py): fix get_api_base to use vertexai_anthropic 2024-07-16 19:17:45 -07:00
Ishaan Jaff
a0e0fba583 fix install on python 3.8 2024-07-16 17:00:32 -07:00
Ishaan Jaff
44410b4e07 fix installing on python3.8 2024-07-16 16:56:15 -07:00
Vinnie Giarrusso
27dbafc1ac refactor a bit 2024-07-16 12:19:31 -07:00
Krish Dholakia
b6ede4eb1b Merge pull request #4716 from pamelafox/countfuncs
Add token counting for OpenAI tools/tool_choice
2024-07-16 07:21:31 -07:00
Vinnie Giarrusso
0535bd2d68 Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook 2024-07-16 01:52:08 -07:00
Krrish Dholakia
ae96837a26 fix(utils.py): allow passing dynamic api base for openai-compatible endpoints 2024-07-15 20:00:44 -07:00
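
A sketch of the dynamic api_base path this enables for OpenAI-compatible endpoints; the local URL and key are placeholders:

    import litellm

    response = litellm.completion(
        model="openai/my-served-model",        # OpenAI-compatible route
        api_base="http://localhost:8000/v1",   # placeholder endpoint
        api_key="sk-placeholder",
        messages=[{"role": "user", "content": "hello"}],
    )
    print(response.choices[0].message.content)
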
Krrish Dholakia
4687b12732 fix(litellm_logging.py): log response_cost=0 for failed calls
Fixes https://github.com/BerriAI/litellm/issues/4604
2024-07-15 19:25:56 -07:00