Commit graph

1954 commits

Author SHA1 Message Date
Krrish Dholakia
663a0c1b83 feat(Support-pass-through-for-bedrock-endpoints): Allows pass-through support for bedrock endpoints 2024-08-17 17:57:43 -07:00
Krrish Dholakia
7ec7c9970b feat(azure.py): support 'json_schema' for older models
Converts the json schema input to a tool call, allows the call to still work on older azure models
2024-08-17 16:31:13 -07:00
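The commit above describes converting a `json_schema` response_format into a tool call so older Azure models can still return structured output. A minimal sketch of that conversion idea (helper and field names here are illustrative, not litellm's internals):

```python
def json_schema_to_tool(response_format: dict) -> dict:
    """Lower an OpenAI-style json_schema response_format into a function-tool
    definition; a model without native json_schema support can then be forced
    to 'call' this tool, yielding structured JSON arguments.
    (Illustrative sketch only.)"""
    schema = response_format["json_schema"]
    return {
        "type": "function",
        "function": {
            "name": schema.get("name", "json_output"),
            "description": "Return the answer as structured JSON.",
            "parameters": schema["schema"],
        },
    }

rf = {
    "type": "json_schema",
    "json_schema": {
        "name": "person",
        "schema": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}
tool = json_schema_to_tool(rf)
```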
Krish Dholakia
a8dd2b6910
Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 19:16:20 -07:00
Krish Dholakia
6b1be4783a
Merge pull request #5251 from Manouchehri/oidc-improvements-20240816
(oidc): Add support for loading tokens via a file, env var, and path in env var
2024-08-16 19:15:31 -07:00
Krish Dholakia
6cf8c47366
Merge pull request #5255 from BerriAI/litellm_fix_token_counter
fix(utils.py): fix get_image_dimensions to handle more image types
2024-08-16 17:27:27 -07:00
Ishaan Jaff
51da6ab64e fix databricks streaming test 2024-08-16 16:56:08 -07:00
Ishaan Jaff
cff01b2de3
Merge pull request #5243 from BerriAI/litellm_add_bedrock_traces_in_response
[Feat] Add bedrock Guardrail `traces` in response when trace=enabled
2024-08-16 14:49:20 -07:00
David Manouchehri
11668c31c1
(oidc): Add support for loading tokens via a file, environment variable, and from a file path set in an env var. 2024-08-16 20:13:07 +00:00
Krrish Dholakia
7129e93992 fix(utils.py): fix get_image_dimensions to handle more image types
Fixes https://github.com/BerriAI/litellm/issues/5205
2024-08-16 12:00:04 -07:00
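The `get_image_dimensions` fixes above extend the image types the helper can parse. As a sketch of the kind of work involved, here is a PNG-only dimension reader (the real helper handles more formats; this is not litellm's code):

```python
import struct

def png_dimensions(data: bytes) -> tuple:
    """Read (width, height) from a PNG byte stream.
    Per the PNG spec, the 8-byte signature is followed by the IHDR chunk,
    whose first 8 data bytes (file offsets 16-24) are width and height
    as big-endian 32-bit integers."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG")
    width, height = struct.unpack(">II", data[16:24])
    return width, height

# Minimal synthetic header: signature + IHDR length/tag + 640x480 dimensions.
sample = (b"\x89PNG\r\n\x1a\n" + struct.pack(">I", 13) + b"IHDR"
          + struct.pack(">II", 640, 480) + b"\x08\x06\x00\x00\x00")
dims = png_dimensions(sample)
```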
Ishaan Jaff
9851fa7b1b return traces in bedrock guardrails when enabled 2024-08-16 11:35:43 -07:00
Krrish Dholakia
61f4b71ef7 refactor: replace .error() with .exception() logging for better debugging on sentry 2024-08-16 09:22:47 -07:00
Ishaan Jaff
89ba7b3e11 pass trace through for bedrock guardrails 2024-08-16 09:10:56 -07:00
Ishaan Jaff
374a46d924
Merge pull request #5173 from gitravin/rn/sagemaker-zero-temp
Allow zero temperature for Sagemaker models based on config
2024-08-16 08:45:44 -07:00
Ishaan Jaff
df4ea8fba6 refactor sagemaker to be async 2024-08-15 18:18:02 -07:00
Krrish Dholakia
1e78b3bf54 fix(utils.py): fix is_azure_openai_model helper function 2024-08-14 14:04:39 -07:00
Krrish Dholakia
583a3b330d fix(utils.py): support calling openai models via azure_ai/ 2024-08-14 13:41:04 -07:00
Krrish Dholakia
3a1b3227d8 fix(utils.py): ignore none chunk in stream infinite loop check
Fixes https://github.com/BerriAI/litellm/issues/5158#issuecomment-2287156946
2024-08-13 15:06:44 -07:00
Ravi N
97cf32630d remove aws_sagemaker_allow_zero_temp from the parameters passed to inference 2024-08-12 21:09:50 -04:00
Krrish Dholakia
a8644d8a7d fix(utils.py): if openai model, don't check hf tokenizers 2024-08-12 16:28:22 -07:00
Krrish Dholakia
fdd9a07051 fix(utils.py): Break out of infinite streaming loop
Fixes https://github.com/BerriAI/litellm/issues/5158
2024-08-12 14:00:43 -07:00
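The two streaming fixes above (break out of an infinite loop; ignore `None` chunks in the loop check) can be sketched as a guard that counts consecutive contentless chunks and bails past a threshold, skipping `None` chunks so they never trip the counter. Names and the threshold are hypothetical, not litellm's actual implementation:

```python
MAX_REPEATED_EMPTY_CHUNKS = 100  # illustrative threshold

def check_stream_loop(chunks):
    """Yield non-empty chunks; raise if the stream emits empty content
    indefinitely (a stuck upstream stream)."""
    empty_streak = 0
    for chunk in chunks:
        if chunk is None:
            # None chunks are ignored entirely - they neither reset
            # nor increment the streak.
            continue
        if chunk == "":
            empty_streak += 1
            if empty_streak > MAX_REPEATED_EMPTY_CHUNKS:
                raise RuntimeError("potential infinite streaming loop detected")
        else:
            empty_streak = 0
            yield chunk
```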
Krrish Dholakia
19bb95f781 build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map
Closes https://github.com/BerriAI/litellm/issues/4881
2024-08-10 14:15:12 -07:00
Krrish Dholakia
0ea056971c docs(prefix.md): add prefix support to docs 2024-08-10 13:55:47 -07:00
Krrish Dholakia
068ee12c30 fix(main.py): safely fail stream_chunk_builder calls 2024-08-10 10:22:26 -07:00
Krrish Dholakia
76785cfb6a fix(utils.py): handle anthropic overloaded error 2024-08-08 17:18:19 -07:00
Ishaan Jaff
68a36600c2 fix use get_file_check_sum 2024-08-08 08:03:08 -07:00
Krrish Dholakia
a15317a377 fix(vertex_ai_partner.py): pass model for llama3 param mapping 2024-08-07 18:07:14 -07:00
Krrish Dholakia
37dc359efb fix(utils.py): fix linting error for python3.8 2024-08-07 13:14:29 -07:00
Krish Dholakia
3605e873a1
Merge branch 'main' into litellm_add_pydantic_model_support 2024-08-07 13:07:46 -07:00
Krrish Dholakia
ff386f6b60 fix(utils.py): support deepseek tool calling
Fixes https://github.com/BerriAI/litellm/issues/5081
2024-08-07 11:14:05 -07:00
Krrish Dholakia
2dd27a4e12 feat(utils.py): support validating json schema client-side if user opts in 2024-08-06 19:35:33 -07:00
Krrish Dholakia
5dfde2ee0b feat: Translate openai 'response_format' json_schema to 'response_schema' for vertex ai + google ai studio
Closes https://github.com/BerriAI/litellm/issues/5074
2024-08-06 19:06:14 -07:00
Krrish Dholakia
9cf3d5f568 feat(utils.py): support passing response_format as pydantic model
Related issue - https://github.com/BerriAI/litellm/issues/5074
2024-08-06 18:16:07 -07:00
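Passing `response_format` as a pydantic model, as the commit above adds, amounts to lowering the model class into the OpenAI-style `json_schema` payload. A sketch of that lowering, assuming pydantic v2's `model_json_schema()` (the helper name is hypothetical):

```python
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str

def pydantic_to_response_format(model_cls) -> dict:
    """Turn a pydantic model class into an OpenAI-style json_schema
    response_format dict (illustrative; not litellm's exact code)."""
    return {
        "type": "json_schema",
        "json_schema": {
            "name": model_cls.__name__,
            "schema": model_cls.model_json_schema(),
        },
    }

rf = pydantic_to_response_format(CalendarEvent)
```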
Ishaan Jaff
aa06df4101 use file size _ name to get file check sum 2024-08-06 15:18:50 -07:00
Ishaan Jaff
c19066e78e use file_checksum 2024-08-06 13:55:22 -07:00
Krrish Dholakia
34213edb91 fix(utils.py): fix dynamic api base 2024-08-06 11:27:39 -07:00
Krrish Dholakia
511f4d33d1 feat(utils.py): check env var for api base for openai-compatible endpoints
Closes https://github.com/BerriAI/litellm/issues/5066
2024-08-06 08:32:44 -07:00
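The env-var api-base lookup for openai-compatible endpoints (commit above) reduces to a fallback chain: explicit argument first, then a provider-scoped environment variable. A sketch, with the env-var naming convention assumed for illustration:

```python
import os
from typing import Optional

def resolve_api_base(provider: str, api_base: Optional[str] = None) -> Optional[str]:
    """Prefer an explicitly passed api_base; otherwise fall back to an
    environment variable such as GITHUB_API_BASE (naming is illustrative)."""
    return api_base or os.environ.get(f"{provider.upper()}_API_BASE")

os.environ["GITHUB_API_BASE"] = "https://models.example/v1"
resolved = resolve_api_base("github")
explicit = resolve_api_base("github", api_base="https://override.example/v1")
```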
Krrish Dholakia
3c4c78a71f feat(caching.py): enable caching on provider-specific optional params
Closes https://github.com/BerriAI/litellm/issues/5049
2024-08-05 11:18:59 -07:00
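Caching on provider-specific optional params (commit above) means those params must participate in the cache key, so two calls differing only in, say, `top_k` do not collide. A minimal key-derivation sketch (not litellm's actual scheme):

```python
import hashlib
import json

def cache_key(model: str, messages: list, optional_params: dict) -> str:
    """Derive a deterministic cache key that includes provider-specific
    optional params alongside model and messages."""
    payload = json.dumps(
        {"model": model, "messages": messages, "optional_params": optional_params},
        sort_keys=True,  # stable ordering so equal inputs hash equally
    )
    return hashlib.sha256(payload.encode()).hexdigest()

msgs = [{"role": "user", "content": "hi"}]
k1 = cache_key("gpt-4", msgs, {"top_k": 5})
k2 = cache_key("gpt-4", msgs, {"top_k": 10})
k3 = cache_key("gpt-4", msgs, {"top_k": 5})
```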
Krrish Dholakia
ed8b20fa18 fix(utils.py): parse out aws specific params from openai call
Fixes https://github.com/BerriAI/litellm/issues/5009
2024-08-03 12:04:44 -07:00
Krrish Dholakia
4258295a07 feat(utils.py): Add github as a provider
Closes https://github.com/BerriAI/litellm/issues/4922#issuecomment-2266564469
2024-08-03 09:11:22 -07:00
Krish Dholakia
5f13d2ee64
Merge pull request #5029 from BerriAI/litellm_azure_ui_fix
fix(utils.py): Fix adding azure models on ui
2024-08-02 22:12:19 -07:00
Krrish Dholakia
5d96ff6694 fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
Fixes https://github.com/BerriAI/litellm/issues/4912
2024-08-02 17:48:53 -07:00
Ishaan Jaff
7ec1f241fc
Merge pull request #5026 from BerriAI/litellm_fix_whisper_caching
[Fix] Whisper Caching - Use correct cache keys for checking request in cache
2024-08-02 17:26:28 -07:00
Ishaan Jaff
ec3b0d0d0b return cache hit True on cache hits 2024-08-02 15:07:05 -07:00
Ishaan Jaff
1b3bc32090 log correct file name on langfuse 2024-08-02 14:49:25 -07:00
Krrish Dholakia
0a30ba9674 fix(types/utils.py): support passing prompt cache usage stats in usage object
Passes deepseek prompt caching values through to end user
2024-08-02 09:30:50 -07:00
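Passing prompt-cache usage stats through the usage object, as described above, can be sketched as extra fields on a usage record so provider values (deepseek's cache hit/miss token counts) survive to the end user. Field names here mirror deepseek's reported fields but the class itself is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    """Token usage record extended with provider prompt-cache stats
    (sketch; not litellm's actual Usage type)."""
    prompt_tokens: int
    completion_tokens: int
    prompt_cache_hit_tokens: int = 0
    prompt_cache_miss_tokens: int = 0

u = Usage(
    prompt_tokens=120,
    completion_tokens=30,
    prompt_cache_hit_tokens=100,
    prompt_cache_miss_tokens=20,
)
```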
Haadi Rakhangi
5439e72a6b
Merge branch 'BerriAI:main' into main 2024-08-02 21:08:48 +05:30
Haadi Rakhangi
851db5ecea qdrant semantic caching added 2024-08-02 21:07:19 +05:30
Krrish Dholakia
fe7e68adc8 fix(utils.py): fix codestral streaming 2024-08-02 07:38:06 -07:00
Krrish Dholakia
4c2ef8ea64 fix(bedrock_httpx.py): fix ai21 streaming 2024-08-01 22:03:24 -07:00
Krish Dholakia
25ac9c2d75
Merge branch 'main' into litellm_fix_streaming_usage_calc 2024-08-01 21:29:04 -07:00