cd7a9898e0 | Krrish Dholakia | 2024-08-26 15:43:30 -07:00
    fix(utils.py): fix message replace

73f8315a77 | Krrish Dholakia | 2024-08-26 13:59:57 -07:00
    fix(utils.py): support 'PERPLEXITY_API_KEY' in env

bc2a96b2a5 | Krish Dholakia | 2024-08-24 18:24:19 -07:00
    Merge branch 'main' into litellm_vertex_migration

e51ac377cc | Krrish Dholakia | 2024-08-23 21:31:59 -07:00
    fix(utils.py): support passing response_format for together ai calls

5eba49c112 | Krish Dholakia | 2024-08-23 21:00:00 -07:00
    Merge pull request #5343 from BerriAI/litellm_sagemaker_chat
    feat(sagemaker.py): add sagemaker messages api support

425d8711b3 | Krrish Dholakia | 2024-08-23 14:22:59 -07:00
    fix(utils.py): only filter additional properties if gemini/vertex ai

f7aa787fe6 | Krrish Dholakia | 2024-08-23 10:31:35 -07:00
    feat(sagemaker.py): add sagemaker messages api support
    Closes https://github.com/BerriAI/litellm/issues/2641
    Closes https://github.com/BerriAI/litellm/pull/5178

2a6aa6da7a | Krrish Dholakia | 2024-08-23 09:21:32 -07:00
    fix(utils.py): handle additionalProperties is False for vertex ai / gemini calls
    Fixes https://github.com/BerriAI/litellm/issues/5338
    Also adds together ai json mode support

2864d16fa1 | Ishaan Jaff | 2024-08-22 18:21:24 -07:00
    Merge branch 'main' into litellm_allow_using_azure_ad_token_auth

2c5fc1ffb4 | Krrish Dholakia | 2024-08-22 11:05:25 -07:00
    docs(utils.py): cleanup docstring

900d8ecbf0 | Krrish Dholakia | 2024-08-22 11:03:33 -07:00
    feat(factory.py): enable 'user_continue_message' for interweaving user/assistant messages when provider requires it
    allows bedrock to be used with autogen
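The 'user_continue_message' commit describes interweaving user/assistant messages for providers, such as Bedrock, that require strictly alternating roles. A minimal sketch of the idea, with an assumed helper name and default filler message (not the actual factory.py implementation):

```python
def interleave_user_messages(messages, user_continue_message=None):
    """Insert a filler user message wherever two assistant messages
    would otherwise be adjacent, so roles strictly alternate."""
    filler = user_continue_message or {"role": "user", "content": "Please continue."}
    fixed = []
    for msg in messages:
        if fixed and fixed[-1]["role"] == "assistant" and msg["role"] == "assistant":
            fixed.append(filler)
        fixed.append(msg)
    # Providers that require the conversation to end on a user turn
    # also get the filler appended after a trailing assistant message.
    if fixed and fixed[-1]["role"] == "assistant":
        fixed.append(filler)
    return fixed
```

This kind of normalization is what lets multi-agent frameworks like autogen, which can emit consecutive assistant turns, talk to providers that reject them.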
26354fbb9d | Ishaan Jaff | 2024-08-22 10:59:18 -07:00
    add azure_ad_token_provider as all litellm params

8f306f8e41 | Krrish Dholakia | 2024-08-22 10:17:36 -07:00
    fix(cohere_chat.py): support passing 'extra_headers'
    Fixes https://github.com/BerriAI/litellm/issues/4709

d87e8f5b30 | Krrish Dholakia | 2024-08-21 17:37:50 -07:00
    feat(utils.py): support global vertex ai safety settings param

e4fe5924a5 | Ishaan Jaff | 2024-08-21 17:06:43 -07:00
    Merge pull request #5326 from BerriAI/litellm_Add_vertex_multimodal_embedding
    [Feat] add vertex multimodal embedding support

0435101df4 | Ishaan Jaff | 2024-08-21 15:05:59 -07:00
    add multi modal vtx embedding

ac5c6c8751 | Krrish Dholakia | 2024-08-21 13:37:21 -07:00
    fix(litellm_pre_call_utils.py): handle dynamic keys via api correctly

a34aeafdb5 | Ishaan Jaff | 2024-08-21 08:50:43 -07:00
    Merge pull request #5018 from haadirakhangi/main
    Qdrant Semantic Caching

a06d9d44a9 | Krrish Dholakia | 2024-08-21 08:48:58 -07:00
    fix(utils.py): support openrouter streaming
    Fixes https://github.com/BerriAI/litellm/issues/5080

0091f64ff1 | Krrish Dholakia | 2024-08-20 19:01:20 -07:00
    fix(utils.py): ensure consistent cost calc b/w returned header and logged object

e49e454929 | Krish Dholakia | 2024-08-20 11:42:48 -07:00
    Merge pull request #5287 from BerriAI/litellm_fix_response_cost_cal
    fix(cost_calculator.py): only override base model if custom pricing is set

969b724615 | Krish Dholakia | 2024-08-20 11:41:38 -07:00
    Merge pull request #5296 from BerriAI/litellm_azure_json_schema_support
    feat(azure.py): support 'json_schema' for older models

cf1a1605a6 | Krrish Dholakia | 2024-08-19 16:05:49 -07:00
    feat(cost_calculator.py): only override base model if custom pricing is set

c5d1899940 | Krrish Dholakia | 2024-08-17 17:57:43 -07:00
    feat(Support-pass-through-for-bedrock-endpoints): Allows pass-through support for bedrock endpoints

afcebac8ed | Krrish Dholakia | 2024-08-17 16:31:13 -07:00
    feat(azure.py): support 'json_schema' for older models
    Converts the json schema input to a tool call, allows the call to still work on older azure models
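Commit afcebac8ed describes repackaging a `json_schema` response_format as a forced tool call, so older Azure models that lack native json_schema support can still emit structured output. A hedged sketch of that conversion, with assumed input/output shapes rather than the actual azure.py code:

```python
def json_schema_to_tool(response_format):
    """Convert an OpenAI-style json_schema response_format into an
    equivalent tool definition plus a tool_choice that forces its use."""
    schema = response_format["json_schema"]
    tool = {
        "type": "function",
        "function": {
            "name": schema.get("name", "json_output"),
            "parameters": schema["schema"],
        },
    }
    # Forcing the model to call this tool yields arguments that
    # conform to the original JSON schema.
    tool_choice = {"type": "function", "function": {"name": tool["function"]["name"]}}
    return tool, tool_choice
```

The caller would then parse the tool call's `arguments` string as the structured response.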
0916197c9d | Krish Dholakia | 2024-08-16 19:16:20 -07:00
    Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry
    refactor: replace .error() with .exception() logging for better debugging on sentry

1844e01133 | Krish Dholakia | 2024-08-16 19:15:31 -07:00
    Merge pull request #5251 from Manouchehri/oidc-improvements-20240816
    (oidc): Add support for loading tokens via a file, env var, and path in env var

6fe21d6dd4 | Krish Dholakia | 2024-08-16 17:27:27 -07:00
    Merge pull request #5255 from BerriAI/litellm_fix_token_counter
    fix(utils.py): fix get_image_dimensions to handle more image types

937471223a | Ishaan Jaff | 2024-08-16 16:56:08 -07:00
    fix databricks streaming test

6de7785442 | Ishaan Jaff | 2024-08-16 14:49:20 -07:00
    Merge pull request #5243 from BerriAI/litellm_add_bedrock_traces_in_response
    [Feat] Add bedrock Guardrail `traces` in response when trace=enabled

f24e986534 | David Manouchehri | 2024-08-16 20:13:07 +00:00
    (oidc): Add support for loading tokens via a file, environment variable, and from a file path set in an env var.

3e42ee1bbb | Krrish Dholakia | 2024-08-16 12:00:04 -07:00
    fix(utils.py): fix get_image_dimensions to handle more image types
    Fixes https://github.com/BerriAI/litellm/issues/5205
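The get_image_dimensions fixes concern reading dimensions from more image types for token counting. As an illustration of the underlying technique of parsing dimensions straight from the file header without decoding the image, a minimal PNG-only sketch (hypothetical helper, not the actual utils.py code):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def get_png_dimensions(data: bytes):
    """Read (width, height) from a PNG header without decoding the image.

    PNG layout: 8-byte signature, 4-byte chunk length, b"IHDR",
    then big-endian 4-byte width and 4-byte height.
    """
    if len(data) < 24 or data[:8] != PNG_SIGNATURE:
        return None  # not a PNG; a fuller helper would try other formats
    return struct.unpack(">II", data[16:24])
```

Handling "more image types" amounts to adding equivalent header parsers (JPEG, GIF, WebP, and so on), each with its own magic bytes and dimension offsets.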
262bf14917 | Ishaan Jaff | 2024-08-16 11:35:43 -07:00
    return traces in bedrock guardrails when enabled

2874b94fb1 | Krrish Dholakia | 2024-08-16 09:22:47 -07:00
    refactor: replace .error() with .exception() logging for better debugging on sentry

98c9191f84 | Ishaan Jaff | 2024-08-16 09:10:56 -07:00
    pass trace through for bedrock guardrails

15334cfae3 | Ishaan Jaff | 2024-08-16 08:45:44 -07:00
    Merge pull request #5173 from gitravin/rn/sagemaker-zero-temp
    Allow zero temperature for Sagemaker models based on config

953a67ba4c | Ishaan Jaff | 2024-08-15 18:18:02 -07:00
    refactor sagemaker to be async

43b90c0b86 | Krrish Dholakia | 2024-08-14 14:04:39 -07:00
    fix(utils.py): fix is_azure_openai_model helper function

3026e69926 | Krrish Dholakia | 2024-08-14 13:41:04 -07:00
    fix(utils.py): support calling openai models via azure_ai/

ec3bf3eda6 | Krrish Dholakia | 2024-08-13 15:06:44 -07:00
    fix(utils.py): ignore none chunk in stream infinite loop check
    Fixes https://github.com/BerriAI/litellm/issues/5158#issuecomment-2287156946

7e12f3f02f | Ravi N | 2024-08-12 21:09:50 -04:00
    remove aws_sagemaker_allow_zero_temp from the parameters passed to inference

7620ef0628 | Krrish Dholakia | 2024-08-12 16:28:22 -07:00
    fix(utils.py): if openai model, don't check hf tokenizers

f4c984878d | Krrish Dholakia | 2024-08-12 14:00:43 -07:00
    fix(utils.py): Break out of infinite streaming loop
    Fixes https://github.com/BerriAI/litellm/issues/5158
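Commits f4c984878d and ec3bf3eda6 together describe a guard that breaks out of an infinite streaming loop while ignoring None chunks. An illustrative sketch of such a guard, where the function name and the repeat threshold are assumptions rather than LiteLLM's actual utils.py logic:

```python
def detect_repeated_chunks(chunks, limit=100):
    """Count consecutive identical non-None chunks; raise once the
    count reaches `limit`, treating it as a stuck stream."""
    last = object()  # sentinel that never equals a real chunk
    count = 0
    for chunk in chunks:
        if chunk is None:
            continue  # a None chunk should not count toward repetition
        if chunk == last:
            count += 1
            if count >= limit:
                raise RuntimeError("Breaking out of a potentially infinite streaming loop")
        else:
            last, count = chunk, 1
    return count
```

Skipping None is the key refinement: without it, benign keep-alive or empty chunks can either reset the counter or falsely trip it.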
4e5d5354c2 | Krrish Dholakia | 2024-08-10 14:15:12 -07:00
    build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map
    Closes https://github.com/BerriAI/litellm/issues/4881

5ad72419d2 | Krrish Dholakia | 2024-08-10 13:55:47 -07:00
    docs(prefix.md): add prefix support to docs

3fd02a1587 | Krrish Dholakia | 2024-08-10 10:22:26 -07:00
    fix(main.py): safely fail stream_chunk_builder calls

7c8484ac15 | Krrish Dholakia | 2024-08-08 17:18:19 -07:00
    fix(utils.py): handle anthropic overloaded error

c6799e8aad | Ishaan Jaff | 2024-08-08 08:03:08 -07:00
    fix use get_file_check_sum

94fb5c093e | Krrish Dholakia | 2024-08-07 18:07:14 -07:00
    fix(vertex_ai_partner.py): pass model for llama3 param mapping

82eb418c86 | Krrish Dholakia | 2024-08-07 13:14:29 -07:00
    fix(utils.py): fix linting error for python3.8