64b6ee9a53 | Krrish Dholakia | 2024-06-13 11:49:42 -07:00 | refactor(utils.py): add more clear error logging
8d8f6017d9 | Krrish Dholakia | 2024-06-13 11:42:16 -07:00 | fix(utils.py): log cache hit as INFO message
50c74fce49 | Krish Dholakia | 2024-06-12 21:19:22 -07:00 | Merge branch 'main' into litellm_vertex_completion_httpx
e60b0e96e4 | Krrish Dholakia | 2024-06-12 21:11:00 -07:00 | fix(vertex_httpx.py): add function calling support to httpx route
994b88118b | Ishaan Jaff | 2024-06-12 20:28:16 -07:00 | feat - add azure ai studio models on litellm ui
1dac2aa59f | Krrish Dholakia | 2024-06-12 19:55:14 -07:00 | fix(vertex_httpx.py): support streaming via httpx client
29169b3039 | Krrish Dholakia | 2024-06-12 16:47:00 -07:00 | feat(vertex_httpx.py): Moving to call vertex ai via httpx (instead of their sdk). Allows us to support all their api updates.
dbdf102a01 | Ishaan Jaff | 2024-06-12 15:00:00 -07:00 | feat - add mistral embedding config
4d30182720 | Ishaan Jaff | 2024-06-12 13:25:45 -07:00 | Merge pull request #4152 from BerriAI/litellm_support_vertex_text_input
    [Feat] Support `task_type`, `auto_truncate` params
58fa6e0cc8 | Krish Dholakia | 2024-06-12 12:47:08 -07:00 | Merge pull request #3861 from Manouchehri/aks-oidc-1852
    feat(util.py/azure.py): Add OIDC support when running LiteLLM on Azure + Azure Upstream caching
e4b36d71cf | Ishaan Jaff | 2024-06-12 09:29:51 -07:00 | feat - support vertex ai dimensions
2622f33bbd | Ishaan Jaff | 2024-06-11 23:15:48 -07:00 | ci/cd fix predibase 500 errors
77332ced58 | Krish Dholakia | 2024-06-11 18:38:42 -07:00 | Merge pull request #4137 from jamesbraza/custom-llm-provider
    Allowing inference of LLM provider in `get_supported_openai_params`
cab0e0e703 | James Braza | 2024-06-11 18:34:10 -07:00 | Added handling of unmapped provider, with test
83114ef714 | Krish Dholakia | 2024-06-11 18:24:58 -07:00 | Merge pull request #4119 from BerriAI/litellm_tiktoken_bump
    feat(utils.py): bump tiktoken dependency to 0.7.0 (gpt-4o token counting support)
f33cb2fbaa | James Braza | 2024-06-11 18:16:19 -07:00 | Allowing inferring custom LLM provider from model inside get_supported_openai_params
a0ee9ba78e | Krrish Dholakia | 2024-06-11 17:51:29 -07:00 | fix(utils.py): support dynamic api key for azure_ai route
caae69c18f | Krrish Dholakia | 2024-06-11 15:49:20 -07:00 | fix(utils.py): fix formatting
4a27a50f9b | Krrish Dholakia | 2024-06-11 14:06:56 -07:00 | fix(utils.py): add new 'azure_ai/' route
    supports azure's openai compatible api endpoint
e7967eb763 | Krrish Dholakia | 2024-06-11 13:35:22 -07:00 | fix(utils.py): allow user to opt in to raw request logging to langfuse
41b6c58ddc | David Manouchehri | 2024-06-11 15:54:34 +00:00 | feat(util.py/azure.py): Add OIDC support when running in Azure Kubernetes Service (AKS).
b75414362b | Krrish Dholakia | 2024-06-10 21:37:54 -07:00 | fix(utils.py): exception map vertex ai 500 internal server errors
74a27df9ba | Krrish Dholakia | 2024-06-10 21:21:23 -07:00 | feat(utils.py): bump tiktoken dependency to 0.7.0
    adds support for gpt-4o token counting
3a31e8011a | Krish Dholakia | 2024-06-10 20:21:16 -07:00 | Merge pull request #4106 from BerriAI/litellm_anthropic_bedrock_tool_calling_fix
    fix(bedrock_httpx.py): fix tool calling for anthropic bedrock calls w/ streaming
5056fd5778 | Krrish Dholakia | 2024-06-10 14:47:49 -07:00 | fix(bedrock_httpx.py): returning correct finish reason on streaming completion
2d95eaa5bc | Krrish Dholakia | 2024-06-10 14:20:25 -07:00 | fix(bedrock_httpx.py): fix tool calling for anthropic bedrock calls w/ streaming
    Fixes https://github.com/BerriAI/litellm/issues/4091
ef9349e6a2 | Ishaan Jaff | 2024-06-08 20:48:54 -07:00 | Merge pull request #4086 from BerriAI/litellm_sdk_tool_calling_fic
    [Fix] Litellm sdk - allow ChatCompletionMessageToolCall, and Function to be used as dict
3be558c4bb | Krish Dholakia | 2024-06-08 20:27:44 -07:00 | Merge pull request #4080 from BerriAI/litellm_predibase_exception_mapping
    fix(utils.py): improved predibase exception mapping
af61eff8e3 | Ishaan Jaff | 2024-06-08 19:47:31 -07:00 | feat - allow ChatCompletionMessageToolCall, and Function to be used as dict
0a886eed6a | Krrish Dholakia | 2024-06-08 19:43:57 -07:00 | fix(cost_calculator.py): fixes tgai unmapped model pricing
    Fixes error where tgai helper function returned None. Enforces stronger type hints, refactors code, adds more unit testing.
39ee6be477 | Krrish Dholakia | 2024-06-08 14:32:43 -07:00 | fix(utils.py): improved predibase exception mapping
    adds unit testing + better coverage for predibase errors
192dfbcd63 | Krrish Dholakia | 2024-06-08 08:59:56 -07:00 | fix(utils.py): fix helicone success logging integration
    Fixes https://github.com/BerriAI/litellm/issues/4062
db0cc83ed5 | Ishaan Jaff | 2024-06-07 18:16:26 -07:00 | fix - vertex ai exception mapping
8958dff9d0 | Ishaan Jaff | 2024-06-07 17:13:32 -07:00 | fix vertex ai exceptions
92841dfe1b | Ishaan Jaff | 2024-06-07 16:52:25 -07:00 | Merge branch 'main' into litellm_security_fix
b16666b5dc | Krrish Dholakia | 2024-06-07 16:06:31 -07:00 | fix(utils.py): fix vertex ai exception mapping
de98bd939c | Krrish Dholakia | 2024-06-07 15:39:15 -07:00 | fix(test_custom_callbacks_input.py): unit tests for 'turn_off_message_logging'
    ensure no raw request is logged either
80def35a04 | Ishaan Jaff | 2024-06-07 14:02:17 -07:00 | Merge pull request #4065 from BerriAI/litellm_use_common_func
    [Refactor] - Refactor proxy_server.py to use common function for `add_litellm_data_to_request`
860c9b52b6 | Ishaan Jaff | 2024-06-07 14:01:54 -07:00 | Merge branch 'main' into litellm_svc_logger
8106a6dc9b | Ishaan Jaff | 2024-06-07 13:48:21 -07:00 | fix simplify - pass litellm_parent_otel_span
c9c25a8ede | Ishaan Jaff | 2024-06-07 12:43:53 -07:00 | Merge branch 'main' into litellm_redact_messages_slack_alerting
672dcf0c6f | Krrish Dholakia | 2024-06-07 10:04:03 -07:00 | fix(factory.py): handle bedrock claude image url's
0f99d47d87 | Ishaan Jaff | 2024-06-07 08:54:28 -07:00 | use litellm_parent_otel_span as litellm_param
7bf5c61007 | Krish Dholakia | 2024-06-07 08:49:52 -07:00 | Merge branch 'main' into litellm_bedrock_converse_api
2c70e55f77 | Krish Dholakia | 2024-06-07 08:49:25 -07:00 | Merge pull request #4054 from BerriAI/litellm_aws_kms_support
    feat(aws_secret_manager.py): Support AWS KMS for Master Key encrption
12ed3dc911 | Krrish Dholakia | 2024-06-07 08:47:51 -07:00 | refactor(main.py): only route anthropic calls through converse api
    v0 scope let's move function calling to converse api
9d81f1cc52 | Krish Dholakia | 2024-06-07 08:03:22 -07:00 | Merge pull request #4049 from BerriAI/litellm_cleanup_traceback
    refactor: replace 'traceback.print_exc()' with logging library
d3d8f97bca | Krish Dholakia | 2024-06-07 07:58:56 -07:00 | Merge branch 'main' into litellm_aws_kms_support
c41b60f6bf | Krrish Dholakia | 2024-06-06 22:13:21 -07:00 | feat(bedrock_httpx.py): working bedrock converse api streaming
f8b5aa3df6 | Krrish Dholakia | 2024-06-06 20:12:41 -07:00 | fix(bedrock_httpx.py): working claude 3 function calling