Commit graph

1259 commits

Each entry: Author · SHA1 · Message · Date
Krrish Dholakia
89791d9285 fix(main.py): response_format typing for acompletion
Fixes https://github.com/BerriAI/litellm/issues/5239
2024-08-20 08:14:14 -07:00
Krrish Dholakia
49416e121c feat(azure.py): support dynamic api versions
Closes https://github.com/BerriAI/litellm/issues/5228
2024-08-19 12:17:43 -07:00
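A minimal sketch of the per-call override this commit enables; the deployment name and API version below are placeholders:

```python
import litellm

# Placeholder deployment name and version string; substitute your own values.
response = litellm.completion(
    model="azure/my-gpt4-deployment",
    api_version="2024-02-15-preview",  # per-call override instead of the env-wide default
    messages=[{"role": "user", "content": "Hello"}],
)
```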
Krish Dholakia
a8dd2b6910
Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 19:16:20 -07:00
Krrish Dholakia
7fce6b0163 fix(health_check.py): return a 'missing mode' error message when a health check fails and mode is missing 2024-08-16 17:24:29 -07:00
Krrish Dholakia
61f4b71ef7 refactor: replace .error() with .exception() logging for better debugging on sentry 2024-08-16 09:22:47 -07:00
Ishaan Jaff
df4ea8fba6 refactor sagemaker to be async 2024-08-15 18:18:02 -07:00
Krrish Dholakia
583a3b330d fix(utils.py): support calling openai models via azure_ai/ 2024-08-14 13:41:04 -07:00
Krrish Dholakia
068ee12c30 fix(main.py): safely fail stream_chunk_builder calls 2024-08-10 10:22:26 -07:00
Krrish Dholakia
a858cc4d0c docs(main.py): clarify 'num_retries' usage 2024-08-09 16:57:06 -07:00
Krrish Dholakia
ba7b070883 fix(utils.py): set max_retries = num_retries, if given 2024-08-09 16:54:54 -07:00
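Illustrative usage tying the two retry knobs together; model and retry count are arbitrary:

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    num_retries=3,  # per the fix above, this also sets the underlying client's max_retries
)
```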
Ishaan Jaff
e734568b5a fix cohere / cohere_chat when timeout is None 2024-08-09 12:10:02 -07:00
Krish Dholakia
2e434d56e3
Merge pull request #5079 from BerriAI/litellm_add_pydantic_model_support
feat(utils.py): support passing response_format as pydantic model
2024-08-07 14:43:05 -07:00
Krish Dholakia
93d048b1dc
Merge branch 'main' into litellm_anthropic_streaming_tool_call_fix 2024-08-07 14:33:30 -07:00
Krrish Dholakia
c0ef2e9dd0 fix(main.py): fix linting error for python3.8 2024-08-07 13:21:35 -07:00
Krish Dholakia
3605e873a1
Merge branch 'main' into litellm_add_pydantic_model_support 2024-08-07 13:07:46 -07:00
Krrish Dholakia
4919cc4d25 fix(anthropic.py): handle scenario where anthropic returns invalid json string for tool call while streaming
Fixes https://github.com/BerriAI/litellm/issues/5063
2024-08-07 09:24:11 -07:00
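A hedged sketch of the defensive idea, not litellm's actual code: buffer the streamed argument fragments and parse only once at the end, tolerating a malformed final string instead of raising mid-stream:

```python
import json

def parse_streamed_arguments(fragments: list[str]):
    # Accumulate raw tool-call argument text across chunks, then parse once.
    raw = "".join(fragments)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return raw  # leave unparsed; the caller decides how to handle the bad payload
```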
Ishaan Jaff
dc3cdf3ed8 fix: use extra headers for OpenRouter 2024-08-07 08:15:05 -07:00
Krrish Dholakia
9cf3d5f568 feat(utils.py): support passing response_format as pydantic model
Related issue - https://github.com/BerriAI/litellm/issues/5074
2024-08-06 18:16:07 -07:00
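Illustrative usage of the new behavior; the schema below is a made-up example:

```python
from pydantic import BaseModel

import litellm

class EventDetails(BaseModel):  # hypothetical schema, for illustration only
    name: str
    date: str

response = litellm.completion(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Alice and Bob meet on Friday."}],
    response_format=EventDetails,  # the pydantic class itself, not a dict
)
```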
Krrish Dholakia
7bf1b4d661 fix(main.py): log hidden params for text completion calls 2024-08-05 21:26:48 -07:00
Krrish Dholakia
3c4c78a71f feat(caching.py): enable caching on provider-specific optional params
Closes https://github.com/BerriAI/litellm/issues/5049
2024-08-05 11:18:59 -07:00
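A sketch of what this changes in practice, assuming the default in-memory cache; top_k stands in for any provider-specific optional param:

```python
import litellm
from litellm.caching import Cache

litellm.cache = Cache()  # default in-memory cache

common = dict(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "hi"}],
    caching=True,
)
# With this change, a provider-specific param like top_k is part of the cache
# key, so these two calls no longer collide on a single cache entry.
r1 = litellm.completion(top_k=5, **common)
r2 = litellm.completion(top_k=50, **common)
```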
Krish Dholakia
bca71019ad
Merge branch 'main' into litellm_anthropic_api_streaming 2024-08-03 21:16:50 -07:00
Krrish Dholakia
ac6c39c283 feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint
Fixes https://github.com/BerriAI/litellm/issues/5011
2024-08-03 20:16:19 -07:00
Joe Cheng
b7be609d6e Use correct key name 2024-08-03 11:58:46 -07:00
Joe Cheng
33f4411f17 Fix tool call coalescing
The previous code seemed to assume that the tool call index property
started at 0, but Anthropic sometimes returns them starting at 1.
This was causing an extra null-ish tool call to be materialized.
2024-08-02 13:05:23 -07:00
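A standalone sketch of the coalescing idea described above, not the actual implementation: group deltas by the index each one reports rather than assuming a 0-based sequence:

```python
from collections import defaultdict

def coalesce_tool_calls(deltas: list[dict]) -> list[dict]:
    # Keyed by the reported index, which may start at 0 or 1.
    by_index: dict[int, dict] = defaultdict(lambda: {"name": "", "arguments": ""})
    for d in deltas:
        slot = by_index[d["index"]]
        slot["name"] = slot["name"] or d.get("name", "")
        slot["arguments"] += d.get("arguments", "")
    # Emit in index order; no phantom slot 0 is materialized for 1-based input.
    return [by_index[i] for i in sorted(by_index)]
```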
Joe Cheng
90dd60fa71 fix(main.py): Handle bedrock tool calling in stream_chunk_builder
Fixes #5022.

The streaming chunks from Anthropic seem to violate an assumption
that is implicit in the stream_chunk_builder implementation: that
only tool_calls OR function_calls OR content will appear in a
streamed response. The repro in #5022 shows that you can get
content followed by tool calls.

These changes properly handle these combinations by building
separate lists of each type of chunk (note that in theory a chunk
could appear in multiple lists, e.g. both delta.tool_calls and
delta.content being present on one chunk).
2024-08-02 12:41:13 -07:00
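A sketch of the bucketing approach the message describes (not litellm's exact code); note a single chunk can land in more than one list:

```python
def split_chunks(chunks):
    # Bucket chunks by what their delta carries: content, tool_calls, function_call.
    content_chunks, tool_call_chunks, function_call_chunks = [], [], []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if delta.get("content") is not None:
            content_chunks.append(chunk)
        if delta.get("tool_calls"):
            tool_call_chunks.append(chunk)  # the same chunk may also carry content
        if delta.get("function_call"):
            function_call_chunks.append(chunk)
    return content_chunks, tool_call_chunks, function_call_chunks
```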
Krish Dholakia
25ac9c2d75
Merge branch 'main' into litellm_fix_streaming_usage_calc 2024-08-01 21:29:04 -07:00
Krrish Dholakia
6e163d3c8a fix(vertex_ai_partner.py): add /chat/completion codestral support
Closes https://github.com/BerriAI/litellm/issues/4984
2024-08-01 18:06:40 -07:00
Krrish Dholakia
c6eabe0253 fix(main.py): fix linting error 2024-08-01 17:33:29 -07:00
Krrish Dholakia
010d5ed81d feat(vertex_ai_partner.py): add vertex ai codestral FIM support
Closes https://github.com/BerriAI/litellm/issues/4984
2024-08-01 17:10:27 -07:00
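A hedged usage sketch of FIM (fill-in-the-middle) via the text-completion route; the codestral model string is an assumption, so verify the exact identifier in Vertex's model garden:

```python
import litellm

response = litellm.text_completion(
    model="vertex_ai/codestral@2405",  # assumed model string
    prompt="def is_odd(n):\n",
    suffix="    return result\n",  # text that should follow the generated middle
)
print(response.choices[0].text)
```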
Krrish Dholakia
246b3227a9 fix(vertex_ai_partner.py): add /chat/completion codestral support
Closes https://github.com/BerriAI/litellm/issues/4984
2024-08-01 16:12:05 -07:00
Krrish Dholakia
ca0a0bed46 fix(utils.py): fix anthropic streaming usage calculation
Fixes https://github.com/BerriAI/litellm/issues/4965
2024-08-01 14:45:54 -07:00
Krish Dholakia
653aefde40
Merge branch 'main' into litellm_async_cohere_calls 2024-07-30 15:35:20 -07:00
Krrish Dholakia
9b2eb1702b fix(cohere.py): support async cohere embedding calls 2024-07-30 14:49:07 -07:00
Krrish Dholakia
99dc7d2e97 fix(main.py): fix linting error 2024-07-30 13:55:04 -07:00
Krrish Dholakia
69afbc6091 feat(huggingface_restapi.py): Support multiple hf embedding types + async hf embeddings
Closes https://github.com/BerriAI/litellm/issues/3261
2024-07-30 13:32:03 -07:00
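Illustrative async usage; the HF model string below is just an example:

```python
import asyncio

import litellm

async def main():
    response = await litellm.aembedding(
        model="huggingface/BAAI/bge-large-en-v1.5",  # example HF model
        input=["hello world"],
    )
    print(response.data[0]["embedding"][:5])

asyncio.run(main())
```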
Krrish Dholakia
7f0daafc56 docs(main.py): update acompletion_with_retries docstring
Closes https://github.com/BerriAI/litellm/issues/4908
2024-07-29 15:50:12 -07:00
Krish Dholakia
63531a9824
Merge pull request #4943 from dleen/logs
Fix: #4942. Remove verbose logging when exception can be handled
2024-07-29 12:12:28 -07:00
David Leen
452441ae03 Fix: #4942. Remove verbose logging when exception can be handled 2024-07-29 12:05:10 -07:00
Krrish Dholakia
66dbd938e8 fix(exceptions.py): use correct status code for content policy exceptions
Fixes https://github.com/BerriAI/litellm/issues/4941#issuecomment-2256578732
2024-07-29 12:01:54 -07:00
Krish Dholakia
e3a94ac013
Merge pull request #4925 from BerriAI/litellm_vertex_mistral
feat(vertex_ai_partner.py): Vertex AI Mistral Support
2024-07-27 21:51:26 -07:00
Krish Dholakia
b854d2100c
Merge branch 'main' into litellm_vertex_migration 2024-07-27 20:25:12 -07:00
Ishaan Jaff
0627468455 fix checking mode on health checks 2024-07-27 20:21:39 -07:00
Krrish Dholakia
c85ed01756 feat(utils.py): fix openai-like streaming 2024-07-27 15:32:57 -07:00
Krrish Dholakia
5b71421a7b feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral
Closes https://github.com/BerriAI/litellm/issues/4874
2024-07-27 12:54:14 -07:00
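An illustrative call with placeholder project and location; the model string follows Vertex's @-versioned naming and should be verified:

```python
import litellm

response = litellm.completion(
    model="vertex_ai/mistral-large@2407",  # assumed model string
    messages=[{"role": "user", "content": "Hello"}],
    vertex_project="my-gcp-project",  # hypothetical project id
    vertex_location="us-central1",
)
```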
Krrish Dholakia
41abd51240 fix(custom_llm.py): pass input params to custom llm 2024-07-25 19:03:52 -07:00
Krrish Dholakia
b4e3a77ad0 feat(utils.py): support sync streaming for custom llm provider 2024-07-25 16:47:32 -07:00
Krrish Dholakia
9f97436308 fix(custom_llm.py): support async completion calls 2024-07-25 15:51:39 -07:00
Krrish Dholakia
6bf1b9353b feat(custom_llm.py): initial working commit for writing your own custom LLM handler
Fixes https://github.com/BerriAI/litellm/issues/4675

Also addresses https://github.com/BerriAI/litellm/discussions/4677
2024-07-25 15:33:05 -07:00
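A minimal handler in the shape this feature introduces, returning a mocked response; the provider name and handler are arbitrary examples:

```python
import litellm
from litellm import CustomLLM, completion

class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # Canned response for illustration; a real handler would call your backend.
        return completion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "hi"}],
            mock_response="Hello from my custom handler!",
        )

# Register the handler under a made-up provider name...
litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

# ...then route calls to it like any other provider.
resp = completion(
    model="my-custom-llm/anything",
    messages=[{"role": "user", "content": "hi"}],
)
```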
Krrish Dholakia
4e51f712f3 fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions
Fixes https://github.com/BerriAI/litellm/issues/749
2024-07-25 09:57:19 -07:00
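Illustrative usage of the completions-API route this fixes:

```python
import litellm

# gpt-3.5-turbo-instruct is a completions-API model; per the fix above it now
# routes through /completions correctly.
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say hello in French:",
)
print(response.choices[0].text)
```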
Krrish Dholakia
83ef52e180 feat(vertex_ai_llama.py): vertex ai llama3.1 api support
Initial working commit for vertex ai llama 3.1 api support
2024-07-23 17:07:30 -07:00