Ishaan Jaff | 6bd6c956a5 | use correct vtx ai21 pricing | 2024-08-29 19:04:05 -07:00
Krish Dholakia | 8ce1e49fbe | fix(utils.py): correctly log streaming cache hits (#5417) (#5426) | 2024-08-28 22:50:33 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/5401
Ishaan Jaff | 11c175a215 | refactor partner models to include ai21 | 2024-08-27 13:35:22 -07:00
Krish Dholakia | 415abc86c6 | Merge pull request #5358 from BerriAI/litellm_fix_retry_after | 2024-08-27 11:50:14 -07:00
  fix retry after - cooldown individual models based on their specific 'retry-after' header
Krrish Dholakia | 18b67a455e | test: fix test | 2024-08-27 10:46:57 -07:00
Krrish Dholakia | 5aad9d2db7 | fix: fix imports | 2024-08-26 22:19:01 -07:00
Krrish Dholakia | 0eea01dae9 | feat(vertex_ai_context_caching.py): check gemini cache, if key already exists | 2024-08-26 22:19:01 -07:00
Krrish Dholakia | b0cc1df2d6 | feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format) | 2024-08-26 22:19:01 -07:00
  Closes https://github.com/BerriAI/litellm/issues/5213
Krish Dholakia | c503ff435e | Merge pull request #5368 from BerriAI/litellm_vertex_function_support | 2024-08-26 22:11:42 -07:00
  feat(vertex_httpx.py): support 'functions' param for gemini google ai studio + vertex ai
Krrish Dholakia | 0a15d3b3c3 | fix(utils.py): fix message replace | 2024-08-26 15:43:30 -07:00
Krrish Dholakia | 174b1c43e3 | fix(utils.py): support 'PERPLEXITY_API_KEY' in env | 2024-08-26 13:59:57 -07:00
Krrish Dholakia | 1cbf851ac2 | fix(utils.py): fix value check | 2024-08-26 12:04:56 -07:00
Krrish Dholakia | b9d1296319 | feat(utils.py): support gemini/vertex ai streaming function param usage | 2024-08-26 11:23:45 -07:00
Krish Dholakia | f27abe0462 | Merge branch 'main' into litellm_vertex_migration | 2024-08-24 18:24:19 -07:00
Krrish Dholakia | 756a828c15 | fix(azure.py): add response header coverage for azure models | 2024-08-24 15:12:51 -07:00
Krrish Dholakia | 87549a2391 | fix(main.py): cover openai /v1/completions endpoint | 2024-08-24 13:25:17 -07:00
Krrish Dholakia | 068aafdff9 | fix(utils.py): correctly re-raise the headers from an exception, if present | 2024-08-24 12:30:30 -07:00
  Fixes issue where retry after on router was not using azure / openai numbers
Krrish Dholakia | 6d2ae5a0d8 | fix(utils.py): support passing response_format for together ai calls | 2024-08-23 21:31:59 -07:00
Krish Dholakia | cd61ddc610 | Merge pull request #5343 from BerriAI/litellm_sagemaker_chat | 2024-08-23 21:00:00 -07:00
  feat(sagemaker.py): add sagemaker messages api support
Krrish Dholakia | 3007f0344d | fix(utils.py): only filter additional properties if gemini/vertex ai | 2024-08-23 14:22:59 -07:00
Krrish Dholakia | 3f116b25a9 | feat(sagemaker.py): add sagemaker messages api support | 2024-08-23 10:31:35 -07:00
  Closes https://github.com/BerriAI/litellm/issues/2641
  Closes https://github.com/BerriAI/litellm/pull/5178
Krrish Dholakia | 93ed8c7216 | fix(utils.py): handle additionalProperties is False for vertex ai / gemini calls | 2024-08-23 09:21:32 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/5338
  Also adds together ai json mode support
Ishaan Jaff | 228252b92d | Merge branch 'main' into litellm_allow_using_azure_ad_token_auth | 2024-08-22 18:21:24 -07:00
Krrish Dholakia | 98f73b35ba | docs(utils.py): cleanup docstring | 2024-08-22 11:05:25 -07:00
Krrish Dholakia | 70bf8bd4f4 | feat(factory.py): enable 'user_continue_message' for interweaving user/assistant messages when provider requires it | 2024-08-22 11:03:33 -07:00
  allows bedrock to be used with autogen
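The 'user_continue_message' commit above addresses providers (e.g. Anthropic-format models on Bedrock) that require strictly alternating user/assistant turns. A hedged sketch of the interleaving idea, with hypothetical names (not litellm's actual factory code): a placeholder user message is inserted wherever an assistant message would otherwise lead the conversation or follow another assistant message.

```python
# Default filler turn; litellm lets callers override this, here it is just an example.
DEFAULT_USER_CONTINUE_MESSAGE = {"role": "user", "content": "Please continue."}


def interleave_user_messages(
    messages: list[dict],
    user_continue_message: dict = DEFAULT_USER_CONTINUE_MESSAGE,
) -> list[dict]:
    """Ensure no assistant message starts the list or directly follows another
    assistant message, by inserting a copy of `user_continue_message`."""
    fixed: list[dict] = []
    for msg in messages:
        if msg["role"] == "assistant" and (not fixed or fixed[-1]["role"] == "assistant"):
            fixed.append(dict(user_continue_message))
        fixed.append(msg)
    return fixed
```

This is what makes multi-agent frameworks like autogen, which can emit consecutive assistant turns, usable with strict-alternation providers.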
Ishaan Jaff | 2a3bc8c190 | add azure_ad_token_provider as all litellm params | 2024-08-22 10:59:18 -07:00
Krrish Dholakia | 11bfc1dca7 | fix(cohere_chat.py): support passing 'extra_headers' | 2024-08-22 10:17:36 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/4709
Krrish Dholakia | 3c99ad19fa | feat(utils.py): support global vertex ai safety settings param | 2024-08-21 17:37:50 -07:00
Ishaan Jaff | dd524a4f50 | Merge pull request #5326 from BerriAI/litellm_Add_vertex_multimodal_embedding | 2024-08-21 17:06:43 -07:00
  [Feat] add vertex multimodal embedding support
Ishaan Jaff | 35781ab8d5 | add multi modal vtx embedding | 2024-08-21 15:05:59 -07:00
Krrish Dholakia | 7aec6f0f2a | fix(litellm_pre_call_utils.py): handle dynamic keys via api correctly | 2024-08-21 13:37:21 -07:00
Ishaan Jaff | 7d0196191f | Merge pull request #5018 from haadirakhangi/main | 2024-08-21 08:50:43 -07:00
  Qdrant Semantic Caching
Krrish Dholakia | 1b6db8359a | fix(utils.py): support openrouter streaming | 2024-08-21 08:48:58 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/5080
Krrish Dholakia | 8e9117f701 | fix(utils.py): ensure consistent cost calc b/w returned header and logged object | 2024-08-20 19:01:20 -07:00
Krish Dholakia | f888204a12 | Merge pull request #5287 from BerriAI/litellm_fix_response_cost_cal | 2024-08-20 11:42:48 -07:00
  fix(cost_calculator.py): only override base model if custom pricing is set
Krish Dholakia | 02eb6455b2 | Merge pull request #5296 from BerriAI/litellm_azure_json_schema_support | 2024-08-20 11:41:38 -07:00
  feat(azure.py): support 'json_schema' for older models
Krrish Dholakia | 55217fa8d7 | feat(cost_calculator.py): only override base model if custom pricing is set | 2024-08-19 16:05:49 -07:00
Krrish Dholakia | 663a0c1b83 | feat(Support-pass-through-for-bedrock-endpoints): Allows pass-through support for bedrock endpoints | 2024-08-17 17:57:43 -07:00
Krrish Dholakia | 7ec7c9970b | feat(azure.py): support 'json_schema' for older models | 2024-08-17 16:31:13 -07:00
  Converts the json schema input to a tool call, allows the call to still work on older azure models
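The json_schema commit above works by translating the structured-output request into a forced function tool call, which older Azure OpenAI models do understand. A minimal sketch of that conversion, under the assumption of an OpenAI-style `response_format` payload (function and key names are illustrative, not litellm's implementation):

```python
def json_schema_to_tool(response_format: dict) -> dict:
    """Turn {'type': 'json_schema', 'json_schema': {...}} into request kwargs
    that force a single function tool whose arguments match the schema."""
    js = response_format["json_schema"]
    tool = {
        "type": "function",
        "function": {
            "name": js.get("name", "json_tool_call"),
            "description": js.get("description", "Respond with JSON matching the schema."),
            "parameters": js["schema"],
        },
    }
    return {
        "tools": [tool],
        # Force the model to call this tool; its arguments become the JSON answer.
        "tool_choice": {
            "type": "function",
            "function": {"name": tool["function"]["name"]},
        },
    }
```

The caller then reads the JSON answer back out of the tool call's arguments instead of the message content.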
Krish Dholakia | a8dd2b6910 | Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry | 2024-08-16 19:16:20 -07:00
  refactor: replace .error() with .exception() logging for better debugging on sentry
Krish Dholakia | 6b1be4783a | Merge pull request #5251 from Manouchehri/oidc-improvements-20240816 | 2024-08-16 19:15:31 -07:00
  (oidc): Add support for loading tokens via a file, env var, and path in env var
Krish Dholakia | 6cf8c47366 | Merge pull request #5255 from BerriAI/litellm_fix_token_counter | 2024-08-16 17:27:27 -07:00
  fix(utils.py): fix get_image_dimensions to handle more image types
Ishaan Jaff | 51da6ab64e | fix databricks streaming test | 2024-08-16 16:56:08 -07:00
Ishaan Jaff | cff01b2de3 | Merge pull request #5243 from BerriAI/litellm_add_bedrock_traces_in_response | 2024-08-16 14:49:20 -07:00
  [Feat] Add bedrock Guardrail `traces` in response when trace=enabled
David Manouchehri | 11668c31c1 | (oidc): Add support for loading tokens via a file, environment variable, and from a file path set in an env var. | 2024-08-16 20:13:07 +00:00
Krrish Dholakia | 7129e93992 | fix(utils.py): fix get_image_dimensions to handle more image types | 2024-08-16 12:00:04 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/5205
Ishaan Jaff | 9851fa7b1b | return traces in bedrock guardrails when enabled | 2024-08-16 11:35:43 -07:00
Krrish Dholakia | 61f4b71ef7 | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
Ishaan Jaff | 89ba7b3e11 | pass trace through for bedrock guardrails | 2024-08-16 09:10:56 -07:00
Ishaan Jaff | 374a46d924 | Merge pull request #5173 from gitravin/rn/sagemaker-zero-temp | 2024-08-16 08:45:44 -07:00
  Allow zero temperature for Sagemaker models based on config