Commit graph

1935 commits

Author SHA1 Message Date
Krrish Dholakia
fdd9a07051 fix(utils.py): Break out of infinite streaming loop
Fixes https://github.com/BerriAI/litellm/issues/5158
2024-08-12 14:00:43 -07:00
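The fix above breaks out of a streaming loop that never terminates. A minimal sketch of the idea, assuming a simple chunk iterator; the function name and the cap value are illustrative, not litellm's actual implementation:

```python
def safe_stream(chunks, max_iterations=10_000):
    """Yield chunks from a streaming response, but bail out if the
    provider keeps emitting chunks without ever finishing."""
    count = 0
    for chunk in chunks:
        count += 1
        if count > max_iterations:
            # Break out instead of spinning forever on a misbehaving stream.
            raise RuntimeError(
                f"Stream exceeded {max_iterations} chunks without finishing"
            )
        yield chunk
```

The cap turns a silent infinite loop into a loud, debuggable error.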
Krrish Dholakia
19bb95f781 build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map
Closes https://github.com/BerriAI/litellm/issues/4881
2024-08-10 14:15:12 -07:00
Krrish Dholakia
0ea056971c docs(prefix.md): add prefix support to docs 2024-08-10 13:55:47 -07:00
Krrish Dholakia
068ee12c30 fix(main.py): safely fail stream_chunk_builder calls 2024-08-10 10:22:26 -07:00
Krrish Dholakia
76785cfb6a fix(utils.py): handle anthropic overloaded error 2024-08-08 17:18:19 -07:00
Ishaan Jaff
68a36600c2 fix use get_file_check_sum 2024-08-08 08:03:08 -07:00
Krrish Dholakia
a15317a377 fix(vertex_ai_partner.py): pass model for llama3 param mapping 2024-08-07 18:07:14 -07:00
Krrish Dholakia
37dc359efb fix(utils.py): fix linting error for python3.8 2024-08-07 13:14:29 -07:00
Krish Dholakia
3605e873a1
Merge branch 'main' into litellm_add_pydantic_model_support 2024-08-07 13:07:46 -07:00
Krrish Dholakia
ff386f6b60 fix(utils.py): support deepseek tool calling
Fixes https://github.com/BerriAI/litellm/issues/5081
2024-08-07 11:14:05 -07:00
Krrish Dholakia
2dd27a4e12 feat(utils.py): support validating json schema client-side if user opts in 2024-08-06 19:35:33 -07:00
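Client-side schema validation, as in the commit above, means checking the model's JSON reply against the requested schema before returning it. A tiny stand-in validator covering only `type`, `required`, and `properties` (real validation would use a full JSON Schema library; this is a sketch of the opt-in check, not litellm's code):

```python
def validate_json_schema(instance, schema):
    """Validate a parsed JSON value against a small JSON Schema subset:
    'type', 'required', and 'properties'."""
    type_map = {"object": dict, "array": list, "string": str,
                "number": (int, float), "integer": int, "boolean": bool}
    expected = schema.get("type")
    if expected and not isinstance(instance, type_map[expected]):
        return False
    if expected == "object":
        for key in schema.get("required", []):
            if key not in instance:
                return False
        for key, subschema in schema.get("properties", {}).items():
            if key in instance and not validate_json_schema(instance[key], subschema):
                return False
    return True
```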
Krrish Dholakia
5dfde2ee0b feat: Translate openai 'response_format' json_schema to 'response_schema' for vertex ai + google ai studio
Closes https://github.com/BerriAI/litellm/issues/5074
2024-08-06 19:06:14 -07:00
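The translation described above can be sketched as a mapping from the OpenAI-style `response_format` dict onto the `response_mime_type` / `response_schema` fields that Vertex AI and Google AI Studio accept. The key names follow the commit message and Google's generation config; litellm's internal helper may differ:

```python
def translate_response_format(response_format):
    """Map an OpenAI-style response_format into Vertex AI / Google AI
    Studio generation-config kwargs. Illustrative sketch only."""
    if response_format.get("type") == "json_schema":
        return {
            "response_mime_type": "application/json",
            "response_schema": response_format["json_schema"]["schema"],
        }
    if response_format.get("type") == "json_object":
        return {"response_mime_type": "application/json"}
    return {}
```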
Krrish Dholakia
9cf3d5f568 feat(utils.py): support passing response_format as pydantic model
Related issue - https://github.com/BerriAI/litellm/issues/5074
2024-08-06 18:16:07 -07:00
Ishaan Jaff
aa06df4101 use file size _ name to get file check sum 2024-08-06 15:18:50 -07:00
Ishaan Jaff
c19066e78e use file_checksum 2024-08-06 13:55:22 -07:00
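The two commits above derive a file checksum from the file's size and name rather than hashing its contents — cheap and deterministic, which is what a cache key needs. A sketch under that assumption (the helper name echoes the commit; the exact litellm implementation may differ):

```python
import hashlib

def get_file_check_sum(file_name, file_size):
    """Build a deterministic checksum from a file's name and size,
    without reading the file contents. Illustrative only."""
    key = f"{file_name}_{file_size}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```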
Krrish Dholakia
34213edb91 fix(utils.py): fix dynamic api base 2024-08-06 11:27:39 -07:00
Krrish Dholakia
511f4d33d1 feat(utils.py): check env var for api base for openai-compatible endpoints
Closes https://github.com/BerriAI/litellm/issues/5066
2024-08-06 08:32:44 -07:00
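Resolving the API base for an openai-compatible endpoint, as above, amounts to: explicit argument wins, else fall back to an environment variable. The `<PROVIDER>_API_BASE` naming scheme here is an assumption for illustration:

```python
import os

def get_api_base(provider, explicit_api_base=None):
    """Resolve the API base for an openai-compatible provider:
    an explicit argument takes precedence, otherwise fall back to an
    env var such as DEEPSEEK_API_BASE. Sketch, not litellm's code."""
    if explicit_api_base:
        return explicit_api_base
    return os.environ.get(f"{provider.upper()}_API_BASE")
```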
Krrish Dholakia
3c4c78a71f feat(caching.py): enable caching on provider-specific optional params
Closes https://github.com/BerriAI/litellm/issues/5049
2024-08-05 11:18:59 -07:00
Krrish Dholakia
ed8b20fa18 fix(utils.py): parse out aws specific params from openai call
Fixes https://github.com/BerriAI/litellm/issues/5009
2024-08-03 12:04:44 -07:00
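Parsing AWS-specific params out of an OpenAI-style call, as in the fix above, can be sketched as splitting kwargs by prefix so credentials like `aws_access_key_id` are kept for the transport layer and never forwarded to the model. The `aws_` prefix rule is an assumption for illustration:

```python
def separate_aws_params(kwargs):
    """Split call kwargs into AWS-specific params (aws_* keys) and the
    remaining OpenAI-style params. Illustrative sketch."""
    aws_params = {k: v for k, v in kwargs.items() if k.startswith("aws_")}
    openai_params = {k: v for k, v in kwargs.items() if not k.startswith("aws_")}
    return aws_params, openai_params
```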
Krrish Dholakia
4258295a07 feat(utils.py): Add github as a provider
Closes https://github.com/BerriAI/litellm/issues/4922#issuecomment-2266564469
2024-08-03 09:11:22 -07:00
Krish Dholakia
5f13d2ee64
Merge pull request #5029 from BerriAI/litellm_azure_ui_fix
fix(utils.py): Fix adding azure models on ui
2024-08-02 22:12:19 -07:00
Krrish Dholakia
5d96ff6694 fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
Fixes https://github.com/BerriAI/litellm/issues/4912
2024-08-02 17:48:53 -07:00
Ishaan Jaff
7ec1f241fc
Merge pull request #5026 from BerriAI/litellm_fix_whisper_caching
[Fix] Whisper Caching - Use correct cache keys for checking request in cache
2024-08-02 17:26:28 -07:00
Ishaan Jaff
ec3b0d0d0b return cache hit True on cache hits 2024-08-02 15:07:05 -07:00
Ishaan Jaff
1b3bc32090 log correct file name on langfuse 2024-08-02 14:49:25 -07:00
Krrish Dholakia
0a30ba9674 fix(types/utils.py): support passing prompt cache usage stats in usage object
Passes deepseek prompt caching values through to end user
2024-08-02 09:30:50 -07:00
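Passing deepseek's prompt-cache stats through, as described above, means lifting the provider's `prompt_cache_hit_tokens` field into the usage object the caller sees. The `Usage` shape below is a simplified stand-in for litellm's types, assuming deepseek's documented usage fields:

```python
from dataclasses import dataclass, field

@dataclass
class PromptTokensDetails:
    cached_tokens: int = 0

@dataclass
class Usage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0
    prompt_tokens_details: PromptTokensDetails = field(
        default_factory=PromptTokensDetails
    )

def usage_from_deepseek(raw):
    """Copy deepseek's prompt-cache hit count into the usage object so
    it reaches the end user. Simplified sketch of the mapping."""
    return Usage(
        prompt_tokens=raw["prompt_tokens"],
        completion_tokens=raw["completion_tokens"],
        total_tokens=raw["total_tokens"],
        prompt_tokens_details=PromptTokensDetails(
            cached_tokens=raw.get("prompt_cache_hit_tokens", 0)
        ),
    )
```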
Haadi Rakhangi
5439e72a6b
Merge branch 'BerriAI:main' into main 2024-08-02 21:08:48 +05:30
Haadi Rakhangi
851db5ecea qdrant semantic caching added 2024-08-02 21:07:19 +05:30
Krrish Dholakia
fe7e68adc8 fix(utils.py): fix codestral streaming 2024-08-02 07:38:06 -07:00
Krrish Dholakia
4c2ef8ea64 fix(bedrock_httpx.py): fix ai21 streaming 2024-08-01 22:03:24 -07:00
Krish Dholakia
25ac9c2d75
Merge branch 'main' into litellm_fix_streaming_usage_calc 2024-08-01 21:29:04 -07:00
Krish Dholakia
e7f83602d8
Merge branch 'main' into litellm_response_cost_logging 2024-08-01 21:28:22 -07:00
Krrish Dholakia
010d5ed81d feat(vertex_ai_partner.py): add vertex ai codestral FIM support
Closes https://github.com/BerriAI/litellm/issues/4984
2024-08-01 17:10:27 -07:00
Krrish Dholakia
28c12e6702 fix(utils.py): fix togetherai streaming cost calculation 2024-08-01 15:03:08 -07:00
Krrish Dholakia
ca0a0bed46 fix(utils.py): fix anthropic streaming usage calculation
Fixes https://github.com/BerriAI/litellm/issues/4965
2024-08-01 14:45:54 -07:00
Krrish Dholakia
10b571ca42 fix(litellm_logging.py): use 1 cost calc function across response headers + logging integrations
Ensures consistent cost calculation when azure base models are used
2024-08-01 10:26:59 -07:00
Krrish Dholakia
1a00f52255 fix(utils.py): fix special keys list for provider-specific items in response object 2024-07-31 18:30:49 -07:00
Krrish Dholakia
09ee8c6e2d fix(utils.py): return additional kwargs from openai-like response body
Closes https://github.com/BerriAI/litellm/issues/4981
2024-07-31 15:37:03 -07:00
Krrish Dholakia
bd68714f51 fix(utils.py): map cohere timeout error 2024-07-31 15:15:18 -07:00
Krrish Dholakia
c65a438de2 fix(utils.py): fix linting errors 2024-07-30 18:38:10 -07:00
Krrish Dholakia
46634af06f fix(utils.py): fix model registration to model cost map
Fixes https://github.com/BerriAI/litellm/issues/4972
2024-07-30 18:15:00 -07:00
Krrish Dholakia
185a6857f9 fix(utils.py): fix cost tracking for vertex ai partner models 2024-07-30 14:20:52 -07:00
Krish Dholakia
6bdb9203b3
Merge pull request #4948 from dleen/response
fixes: #4947 Bedrock context exception does not have a response
2024-07-29 15:03:40 -07:00
David Leen
ebca13b2ca fixes: #4947 Bedrock context exception does not have a response 2024-07-29 14:23:56 -07:00
Krrish Dholakia
ae4bcd8a41 fix(utils.py): fix trim_messages to handle tool calling
Fixes https://github.com/BerriAI/litellm/issues/4931
2024-07-29 13:04:41 -07:00
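The subtlety in trimming messages with tool calling is that a `tool` result must never be orphaned from the assistant message whose `tool_calls` it answers. A simplified, message-count-based sketch of that invariant (real trimming is token-based, and this is not litellm's implementation):

```python
def trim_messages(messages, max_messages):
    """Drop oldest messages until at most max_messages remain, but when
    an assistant message with tool_calls is dropped, also drop the tool
    results that answer it. Illustrative sketch."""
    trimmed = list(messages)
    while len(trimmed) > max_messages:
        dropped = trimmed.pop(0)
        if dropped.get("role") == "assistant" and dropped.get("tool_calls"):
            # Tool results directly follow the call that produced them.
            while trimmed and trimmed[0].get("role") == "tool":
                trimmed.pop(0)
    return trimmed
```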
Krrish Dholakia
59384c84a5 fix(utils.py): correctly re-raise azure api connection error
2024-07-29 12:28:25 -07:00
Krrish Dholakia
52b1ca292c fix(utils.py): check if tools is iterable before indexing into it
Fixes https://github.com/BerriAI/litellm/issues/4933
2024-07-29 09:01:32 -07:00
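The guard described above is small but worth spelling out: indexing `tools[0]` blows up when a caller passes `None` or some other non-indexable value. A hedged sketch of the check (function name is illustrative):

```python
def first_tool(tools):
    """Safely read the first entry of 'tools', guarding against values
    that cannot be indexed (None, a bool, etc.). Illustrative only."""
    if isinstance(tools, (list, tuple)) and tools:
        return tools[0]
    return None
```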
Ravi N
b321f2988b Allow zero temperature for Sagemaker models based on config
Since SageMaker can host any kind of model, some models allow
zero temperature; however, this is not enabled by default and
is only allowed via config
2024-07-28 21:55:53 -04:00
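The commit above can be sketched as a config-gated parameter mapping: by default `temperature=0` is clamped to a small positive value for models that reject zero, and the clamp is skipped when the config opts in. The flag name and the 0.01 floor are assumptions for illustration, not litellm's exact values:

```python
def map_sagemaker_temperature(temperature, allow_zero_temp=False):
    """Clamp temperature=0 to a small positive value unless the
    deployment's config says the hosted model accepts zero."""
    if temperature == 0 and not allow_zero_temp:
        return 0.01  # floor for models that reject zero, hypothetical value
    return temperature
```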
Krrish Dholakia
1ac42d8464 fix: utils.py
fix supported openai params
2024-07-27 22:03:40 -07:00
Krish Dholakia
e3a94ac013
Merge pull request #4925 from BerriAI/litellm_vertex_mistral
feat(vertex_ai_partner.py): Vertex AI Mistral Support
2024-07-27 21:51:26 -07:00