Author | Commit | Message | Date
Krrish Dholakia | 3e42ee1bbb | fix(utils.py): fix get_image_dimensions to handle more image types (Fixes https://github.com/BerriAI/litellm/issues/5205) | 2024-08-16 12:00:04 -07:00
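
Commit 3e42ee1bbb extends litellm's internal get_image_dimensions helper to more image types. As a rough illustration of the task (not litellm's actual implementation), here is a minimal sketch that pulls width/height out of a base64 image data URL, with Pillow doing the format sniffing; the helper name is hypothetical:

```python
# Illustrative sketch only -- not litellm's internal code.
import base64
import io

from PIL import Image  # pip install Pillow


def image_dimensions_from_data_url(data_url: str) -> tuple[int, int]:
    # Split "data:image/png;base64,<payload>" into header and payload.
    _, _, payload = data_url.partition(",")
    raw = base64.b64decode(payload)
    # Pillow detects PNG, JPEG, GIF, WEBP, etc. from the raw bytes.
    with Image.open(io.BytesIO(raw)) as img:
        return img.width, img.height
```
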
Ishaan Jaff | 262bf14917 | return traces in bedrock guardrails when enabled | 2024-08-16 11:35:43 -07:00
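
This commit and 98c9191f84 below surface Bedrock guardrail traces to the caller. A hedged sketch of requesting a trace through litellm, using the guardrailConfig fields from AWS's Bedrock API; the guardrail identifier and version are placeholders:

```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "hello"}],
    guardrailConfig={
        "guardrailIdentifier": "gr-abc123",  # placeholder guardrail ID
        "guardrailVersion": "DRAFT",         # or a numbered version
        "trace": "enabled",                  # ask Bedrock to return the trace
    },
)
# With these commits, the guardrail trace Bedrock returns is surfaced
# on the response instead of being dropped.
```
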
Krrish Dholakia | 2874b94fb1 | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
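
The difference this refactor relies on is standard Python logging behavior:

```python
import logging

logger = logging.getLogger(__name__)


def risky_call() -> None:
    raise ValueError("boom")


try:
    risky_call()
except ValueError:
    # logger.exception() = logger.error() + the active traceback.
    # That stack context is what makes Sentry events debuggable.
    logger.exception("risky_call failed")
```
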
Ishaan Jaff | 98c9191f84 | pass trace through for bedrock guardrails | 2024-08-16 09:10:56 -07:00
Ishaan Jaff | 15334cfae3 | Merge pull request #5173 from gitravin/rn/sagemaker-zero-temp: Allow zero temperature for Sagemaker models based on config | 2024-08-16 08:45:44 -07:00
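
PR #5173, together with the follow-up commit 7e12f3f02f below, lets callers opt into sending temperature=0 to SageMaker models that support it, with the opt-in flag stripped before the inference call. A sketch based on the flag named in these commits; the endpoint name is a placeholder:

```python
import litellm

response = litellm.completion(
    model="sagemaker/my-llama2-endpoint",  # placeholder endpoint name
    messages=[{"role": "user", "content": "hello"}],
    temperature=0,
    aws_sagemaker_allow_zero_temp=True,  # opt in to sending 0 as-is; the flag
                                         # itself is stripped before inference
)
```
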
Ishaan Jaff | 953a67ba4c | refactor sagemaker to be async | 2024-08-15 18:18:02 -07:00
Krrish Dholakia | 43b90c0b86 | fix(utils.py): fix is_azure_openai_model helper function | 2024-08-14 14:04:39 -07:00
Krrish Dholakia | 3026e69926 | fix(utils.py): support calling openai models via azure_ai/ | 2024-08-14 13:41:04 -07:00
Krrish Dholakia | ec3bf3eda6 | fix(utils.py): ignore none chunk in stream infinite loop check (Fixes https://github.com/BerriAI/litellm/issues/5158#issuecomment-2287156946) | 2024-08-13 15:06:44 -07:00
Ravi N | 7e12f3f02f | remove aws_sagemaker_allow_zero_temp from the parameters passed to inference | 2024-08-12 21:09:50 -04:00
Krrish Dholakia | 7620ef0628 | fix(utils.py): if openai model, don't check hf tokenizers | 2024-08-12 16:28:22 -07:00
Krrish Dholakia | f4c984878d | fix(utils.py): Break out of infinite streaming loop (Fixes https://github.com/BerriAI/litellm/issues/5158) | 2024-08-12 14:00:43 -07:00
Krrish Dholakia | 4e5d5354c2 | build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map (Closes https://github.com/BerriAI/litellm/issues/4881) | 2024-08-10 14:15:12 -07:00
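
The new 'supports_assistant_prefill' flag should be readable via litellm.get_model_info; a sketch, with the model name purely as an example:

```python
import litellm

info = litellm.get_model_info(model="claude-3-5-sonnet-20240620")  # example model
# True once the model's entry in model_prices_and_context_window.json
# carries the new flag.
print(info.get("supports_assistant_prefill", False))
```
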
Krrish Dholakia | 5ad72419d2 | docs(prefix.md): add prefix support to docs | 2024-08-10 13:55:47 -07:00
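
The prefix docs added here cover assistant prefill: the model continues from a partial assistant turn. A sketch of the message shape for a provider that supports it (deepseek here, as an assumption):

```python
import litellm

response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[
        {"role": "user", "content": "Who won the world cup in 2022?"},
        # The model continues from this partial assistant message.
        {"role": "assistant", "content": "Argentina won because", "prefix": True},
    ],
)
print(response.choices[0].message.content)
```
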
Krrish Dholakia | 3fd02a1587 | fix(main.py): safely fail stream_chunk_builder calls | 2024-08-10 10:22:26 -07:00
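
litellm.stream_chunk_builder is the helper that reassembles a full response from streamed chunks; this commit hardens it. Typical usage, sketched:

```python
import litellm

messages = [{"role": "user", "content": "hello"}]
chunks = list(
    litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True)
)

# After this fix, malformed chunk lists fail safely instead of raising
# from inside the builder.
full_response = litellm.stream_chunk_builder(chunks, messages=messages)
print(full_response)
```
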
Krrish Dholakia | 7c8484ac15 | fix(utils.py): handle anthropic overloaded error | 2024-08-08 17:18:19 -07:00
Ishaan Jaff | c6799e8aad | fix use get_file_check_sum | 2024-08-08 08:03:08 -07:00
Krrish Dholakia | 94fb5c093e | fix(vertex_ai_partner.py): pass model for llama3 param mapping | 2024-08-07 18:07:14 -07:00
Krrish Dholakia | 82eb418c86 | fix(utils.py): fix linting error for python3.8 | 2024-08-07 13:14:29 -07:00
Krish Dholakia | 77a33baabb | Merge branch 'main' into litellm_add_pydantic_model_support | 2024-08-07 13:07:46 -07:00
Krrish Dholakia | 788b06a33c | fix(utils.py): support deepseek tool calling (Fixes https://github.com/BerriAI/litellm/issues/5081) | 2024-08-07 11:14:05 -07:00
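
Deepseek tool calling follows the OpenAI tools schema, so through litellm it looks like any other tools request. A sketch with a hypothetical get_weather tool:

```python
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```
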
Krrish Dholakia | 8b028d41aa | feat(utils.py): support validating json schema client-side if user opts in | 2024-08-06 19:35:33 -07:00
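
litellm's docs name a module-level flag for client-side JSON schema validation; treating that name as the opt-in this commit adds:

```python
import litellm

# Off by default. When enabled, litellm validates JSON responses against
# the supplied schema client-side and errors on mismatch.
litellm.enable_json_schema_validation = True
```
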
Krrish Dholakia | 831dc1b886 | feat: Translate openai 'response_format' json_schema to 'response_schema' for vertex ai + google ai studio (Closes https://github.com/BerriAI/litellm/issues/5074) | 2024-08-06 19:06:14 -07:00
Krrish Dholakia | 2b132c6bef | feat(utils.py): support passing response_format as pydantic model (Related: https://github.com/BerriAI/litellm/issues/5074) | 2024-08-06 18:16:07 -07:00
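
Together, the two commits above let response_format take a Pydantic model, with the generated json_schema translated into 'response_schema' for Vertex AI and Google AI Studio. A sketch; the model choice is illustrative:

```python
from pydantic import BaseModel

import litellm


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


response = litellm.completion(
    model="gemini/gemini-1.5-pro",  # illustrative; translation applies to vertex/gemini
    messages=[
        {"role": "user", "content": "Alice and Bob meet Friday for standup."}
    ],
    response_format=CalendarEvent,  # converted to a JSON schema under the hood
)
print(response.choices[0].message.content)  # JSON conforming to CalendarEvent
```
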
Ishaan Jaff | ff9fe47989 | use file size _ name to get file check sum | 2024-08-06 15:18:50 -07:00
Ishaan Jaff | 75cbdda714 | use file_checksum | 2024-08-06 13:55:22 -07:00
Krrish Dholakia | 152e7ebc51 | fix(utils.py): fix dynamic api base | 2024-08-06 11:27:39 -07:00
Krrish Dholakia | 3a3381387f | feat(utils.py): check env var for api base for openai-compatible endpoints (Closes https://github.com/BerriAI/litellm/issues/5066) | 2024-08-06 08:32:44 -07:00
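
With commit 3a3381387f, litellm resolves the API base for OpenAI-compatible endpoints from an environment variable. A sketch; the exact variable name per provider is an assumption:

```python
import os

import litellm

os.environ["DEEPSEEK_API_BASE"] = "https://my-proxy.internal/v1"  # illustrative

# No api_base argument needed -- it is resolved from the environment.
response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "hello"}],
)
```
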
Krrish Dholakia | 8500f6d087 | feat(caching.py): enable caching on provider-specific optional params (Closes https://github.com/BerriAI/litellm/issues/5049) | 2024-08-05 11:18:59 -07:00
Krrish Dholakia | bbc56f7202 | fix(utils.py): parse out aws specific params from openai call (Fixes https://github.com/BerriAI/litellm/issues/5009) | 2024-08-03 12:04:44 -07:00
Krrish Dholakia | acbc2917b8 | feat(utils.py): Add github as a provider (Closes https://github.com/BerriAI/litellm/issues/4922#issuecomment-2266564469) | 2024-08-03 09:11:22 -07:00
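
The new GitHub provider is addressed with a github/ model prefix. A sketch; the model id and env var follow litellm's usual provider conventions and are assumptions here:

```python
import os

import litellm

os.environ["GITHUB_API_KEY"] = "ghp-..."  # placeholder token

response = litellm.completion(
    model="github/llama3-8b-8192",  # illustrative model id
    messages=[{"role": "user", "content": "hello"}],
)
```
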
Krish Dholakia | 95175b0b34 | Merge pull request #5029 from BerriAI/litellm_azure_ui_fix: fix(utils.py): Fix adding azure models on ui | 2024-08-02 22:12:19 -07:00
Krrish Dholakia | e6bc7e938a | fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure" (Fixes https://github.com/BerriAI/litellm/issues/4912) | 2024-08-02 17:48:53 -07:00
Ishaan Jaff | 954dd95bdb | Merge pull request #5026 from BerriAI/litellm_fix_whisper_caching: [Fix] Whisper Caching - Use correct cache keys for checking request in cache | 2024-08-02 17:26:28 -07:00
Ishaan Jaff | 8074d0d3f8 | return cache hit True on cache hits | 2024-08-02 15:07:05 -07:00
Ishaan Jaff | f5ec25248a | log correct file name on langfuse | 2024-08-02 14:49:25 -07:00
Krrish Dholakia | c1513bfe42 | fix(types/utils.py): support passing prompt cache usage stats in usage object (Passes deepseek prompt caching values through to end user) | 2024-08-02 09:30:50 -07:00
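
Commit c1513bfe42 forwards deepseek's prompt-cache counters on the usage object. The field names below follow deepseek's documented usage fields and are an assumption for litellm's pass-through:

```python
import litellm

response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "hello"}],
)
usage = response.usage
print(getattr(usage, "prompt_cache_hit_tokens", None))   # tokens served from cache
print(getattr(usage, "prompt_cache_miss_tokens", None))  # tokens computed fresh
```
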
Haadi Rakhangi | 1adf7bc909 | Merge branch 'BerriAI:main' into main | 2024-08-02 21:08:48 +05:30
Haadi Rakhangi | a047df3825 | qdrant semantic caching added | 2024-08-02 21:07:19 +05:30
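
For the qdrant semantic cache added here, a configuration sketch; parameter names follow litellm's caching docs and should be treated as assumptions for this commit:

```python
import litellm
from litellm.caching import Cache

litellm.cache = Cache(
    type="qdrant-semantic",
    qdrant_api_base="https://my-qdrant.example.com",  # placeholder
    qdrant_api_key="qd-...",                          # placeholder
    qdrant_collection_name="litellm-cache",
    similarity_threshold=0.8,  # minimum similarity for a cached answer to be reused
)
```
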
Krrish Dholakia | 8204037975 | fix(utils.py): fix codestral streaming | 2024-08-02 07:38:06 -07:00
Krrish Dholakia | 70afbafd94 | fix(bedrock_httpx.py): fix ai21 streaming | 2024-08-01 22:03:24 -07:00
Krish Dholakia | 0fc50a69ee | Merge branch 'main' into litellm_fix_streaming_usage_calc | 2024-08-01 21:29:04 -07:00
Krish Dholakia | 375a4049aa | Merge branch 'main' into litellm_response_cost_logging | 2024-08-01 21:28:22 -07:00
Krrish Dholakia | cb9b19e887 | feat(vertex_ai_partner.py): add vertex ai codestral FIM support (Closes https://github.com/BerriAI/litellm/issues/4984) | 2024-08-01 17:10:27 -07:00
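
litellm exposes codestral's FIM (fill-in-the-middle) completions through text_completion with a suffix; the Vertex AI model id below is illustrative:

```python
import litellm

response = litellm.text_completion(
    model="vertex_ai/codestral@2405",  # illustrative model id
    prompt="def fibonacci(n):",
    suffix="return result",  # model fills in the body between prompt and suffix
    max_tokens=128,
)
print(response.choices[0].text)
```
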
Krrish Dholakia | 71aada78d6 | fix(utils.py): fix togetherai streaming cost calculation | 2024-08-01 15:03:08 -07:00
Krrish Dholakia | a502914f13 | fix(utils.py): fix anthropic streaming usage calculation (Fixes https://github.com/BerriAI/litellm/issues/4965) | 2024-08-01 14:45:54 -07:00
Krrish Dholakia | 08541d056c | fix(litellm_logging.py): use 1 cost calc function across response headers + logging integrations (Ensures consistent cost calculation when azure base models are used) | 2024-08-01 10:26:59 -07:00
Krrish Dholakia | ac6bca2320 | fix(utils.py): fix special keys list for provider-specific items in response object | 2024-07-31 18:30:49 -07:00
Krrish Dholakia | 1206b7626a | fix(utils.py): return additional kwargs from openai-like response body (Closes https://github.com/BerriAI/litellm/issues/4981) | 2024-07-31 15:37:03 -07:00
Krrish Dholakia | b95feb16f1 | fix(utils.py): map cohere timeout error | 2024-07-31 15:15:18 -07:00