Ishaan Jaff | 2a5eabcceb | feat: add text completion config for mistral text | 2024-06-17 12:48:46 -07:00
Ishaan Jaff | 55bfb181a3 | feat: working chat and text completion for codestral | 2024-06-17 11:30:22 -07:00
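
A minimal sketch of what the codestral routes above enable. The "codestral/" (chat) and "text-completion-codestral/" (text/FIM) provider prefixes and the model name are assumptions drawn from the commit subjects, not confirmed API:

```python
import litellm

# Chat route
chat = litellm.completion(
    model="codestral/codestral-latest",
    messages=[{"role": "user", "content": "Write hello world in Python"}],
)

# Text-completion route
text = litellm.text_completion(
    model="text-completion-codestral/codestral-latest",
    prompt="def hello_world():",
)
```
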
Ishaan Jaff | 1a30068f90 | v0 - init commit adding codestral API | 2024-06-17 11:05:24 -07:00
Krrish Dholakia | d1ab1c890b | docs(utils.py): add comments explaining utils vs. core utils | 2024-06-15 14:50:05 -07:00
Krrish Dholakia | 019533d815 | fix(utils.py): move 'set_callbacks' to litellm_logging.py | 2024-06-15 12:02:30 -07:00
Krrish Dholakia | 7de77ab677 | fix(init.py): fix imports | 2024-06-15 11:31:09 -07:00
Krrish Dholakia | 9d7f5d503c | refactor(utils.py): refactor Logging into its own class; cut utils.py to <10k lines for easier debugging. Ref: https://github.com/BerriAI/litellm/issues/4206 | 2024-06-15 10:57:20 -07:00
Krrish Dholakia | 734bd5ef85 | feat(router.py): support content policy fallbacks. Closes https://github.com/BerriAI/litellm/issues/2632 | 2024-06-14 17:15:44 -07:00
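
A minimal sketch of the content-policy fallback flow from the router commit above, assuming the Router accepts a `content_policy_fallbacks` list of {primary: [fallbacks]} mappings (deployment names are illustrative):

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "claude-haiku", "litellm_params": {"model": "claude-3-haiku-20240307"}},
    ],
    # If the primary raises a content-policy violation, retry on the fallback.
    content_policy_fallbacks=[{"gpt-3.5-turbo": ["claude-haiku"]}],
)

resp = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
```
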
Krrish Dholakia | 29e06f4e72 | fix(utils.py): return traceback on unmapped exception error. Fixes https://github.com/BerriAI/litellm/issues/4201 | 2024-06-14 15:08:01 -07:00
Krrish Dholakia | b580e0992d | fix(utils.py): check that model info matches the requested provider; fixes incorrect pricing being used for custom LLM providers | 2024-06-13 15:54:24 -07:00
Ishaan Jaff | 0ebbad9fa6 | Merge pull request #4176 from BerriAI/litellm_fix_redacting_msgs: [Fix] redacting messages from OTEL + refactor `utils.py` to use `litellm_core_utils` | 2024-06-13 13:50:13 -07:00
Ishaan Jaff | 72cc0618a4 | fix: redacting messages in litellm | 2024-06-13 11:52:20 -07:00
Krrish Dholakia | 64b6ee9a53 | refactor(utils.py): add clearer error logging | 2024-06-13 11:49:42 -07:00
Krrish Dholakia | 8d8f6017d9 | fix(utils.py): log cache hit as INFO message | 2024-06-13 11:42:16 -07:00
Krish Dholakia | 50c74fce49 | Merge branch 'main' into litellm_vertex_completion_httpx | 2024-06-12 21:19:22 -07:00
Krrish Dholakia | e60b0e96e4 | fix(vertex_httpx.py): add function calling support to httpx route | 2024-06-12 21:11:00 -07:00
Ishaan Jaff | 994b88118b | feat: add Azure AI Studio models on the litellm UI | 2024-06-12 20:28:16 -07:00
Krrish Dholakia | 1dac2aa59f | fix(vertex_httpx.py): support streaming via httpx client | 2024-06-12 19:55:14 -07:00
Krrish Dholakia | 29169b3039 | feat(vertex_httpx.py): move to calling Vertex AI via httpx (instead of their SDK), so all of their API updates can be supported | 2024-06-12 16:47:00 -07:00
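
For context on the httpx migration above, the SDK call being replaced boils down to a REST request shaped roughly like the following. This is illustrative only (project, token, and model are placeholders), not litellm's actual internals:

```python
import httpx

# Vertex AI generateContent REST endpoint.
url = (
    "https://us-central1-aiplatform.googleapis.com/v1/projects/<project>"
    "/locations/us-central1/publishers/google/models/gemini-1.5-pro:generateContent"
)
resp = httpx.post(
    url,
    headers={"Authorization": "Bearer <access-token>"},
    json={"contents": [{"role": "user", "parts": [{"text": "hello"}]}]},
)
print(resp.json())
```
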
Ishaan Jaff | dbdf102a01 | feat: add mistral embedding config | 2024-06-12 15:00:00 -07:00
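
A short sketch of the call this embedding config supports, assuming the "mistral/" provider prefix and the mistral-embed model:

```python
import litellm

resp = litellm.embedding(
    model="mistral/mistral-embed",
    input=["embed this sentence"],
)
print(len(resp.data))  # one embedding per input string
```
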
Ishaan Jaff | 4d30182720 | Merge pull request #4152 from BerriAI/litellm_support_vertex_text_input: [Feat] support `task_type`, `auto_truncate` params | 2024-06-12 13:25:45 -07:00
Krish Dholakia | 58fa6e0cc8 | Merge pull request #3861 from Manouchehri/aks-oidc-1852: feat(util.py/azure.py): add OIDC support when running LiteLLM on Azure + Azure upstream caching | 2024-06-12 12:47:08 -07:00
Ishaan Jaff | e4b36d71cf | feat: support Vertex AI dimensions | 2024-06-12 09:29:51 -07:00
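
A hedged sketch combining the Vertex embedding params from the commits above (`task_type` and `auto_truncate` per PR #4152, `dimensions` per the commit just above); the exact parameter plumbing is an assumption from the commit and PR titles:

```python
import litellm

resp = litellm.embedding(
    model="vertex_ai/text-embedding-004",       # model name illustrative
    input=["classify this support ticket"],
    task_type="CLASSIFICATION",  # Vertex-specific, per PR #4152
    auto_truncate=True,          # Vertex-specific, per PR #4152
    dimensions=256,              # per the "support Vertex AI dimensions" commit
)
```
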
Ishaan Jaff | 2622f33bbd | ci/cd: fix predibase 500 errors | 2024-06-11 23:15:48 -07:00
Krish Dholakia | 77332ced58 | Merge pull request #4137 from jamesbraza/custom-llm-provider: allowing inference of LLM provider in `get_supported_openai_params` | 2024-06-11 18:38:42 -07:00
James Braza | cab0e0e703 | Added handling of unmapped provider, with test | 2024-06-11 18:34:10 -07:00
Krish Dholakia | 83114ef714 | Merge pull request #4119 from BerriAI/litellm_tiktoken_bump: feat(utils.py): bump tiktoken dependency to 0.7.0 (gpt-4o token counting support) | 2024-06-11 18:24:58 -07:00
James Braza | f33cb2fbaa | Allow inferring the custom LLM provider from the model inside `get_supported_openai_params` | 2024-06-11 18:16:19 -07:00
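
Per PR #4137 above, the provider argument can now be inferred from the model name; a sketch (model name illustrative):

```python
from litellm import get_supported_openai_params

# Before: get_supported_openai_params(model=..., custom_llm_provider="anthropic")
# Now the provider is inferred from the model string itself.
params = get_supported_openai_params(model="claude-3-opus-20240229")
print(params)  # e.g. ["temperature", "max_tokens", "tools", ...]
```
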
Krrish Dholakia | a0ee9ba78e | fix(utils.py): support dynamic api key for azure_ai route | 2024-06-11 17:51:29 -07:00
Krrish Dholakia | caae69c18f | fix(utils.py): fix formatting | 2024-06-11 15:49:20 -07:00
Krrish Dholakia | 4a27a50f9b | fix(utils.py): add new 'azure_ai/' route, supporting Azure's OpenAI-compatible API endpoint | 2024-06-11 14:06:56 -07:00
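
A minimal sketch of the new 'azure_ai/' route against an Azure AI Studio OpenAI-compatible endpoint; the URL, key, and model name are placeholders:

```python
import litellm

resp = litellm.completion(
    model="azure_ai/command-r-plus",
    api_base="https://<deployment>.<region>.inference.ai.azure.com",
    api_key="<AZURE_AI_API_KEY>",  # passed per-call (dynamic api key, per the commit above)
    messages=[{"role": "user", "content": "hello"}],
)
```
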
Krrish Dholakia | e7967eb763 | fix(utils.py): allow user to opt in to raw request logging to langfuse | 2024-06-11 13:35:22 -07:00
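
A hedged sketch of the opt-in above; the flag name `log_raw_request_response` is an assumption based on the commit subject:

```python
import litellm

litellm.success_callback = ["langfuse"]
litellm.log_raw_request_response = True  # assumed flag name; off by default
```
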
David Manouchehri | 41b6c58ddc | feat(util.py/azure.py): add OIDC support when running in Azure Kubernetes Service (AKS) | 2024-06-11 15:54:34 +00:00
Krrish Dholakia | b75414362b | fix(utils.py): exception-map Vertex AI 500 internal server errors | 2024-06-10 21:37:54 -07:00
Krrish Dholakia | 74a27df9ba | feat(utils.py): bump tiktoken dependency to 0.7.0, adding support for gpt-4o token counting | 2024-06-10 21:21:23 -07:00
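
tiktoken 0.7.0 ships the o200k_base encoding used by gpt-4o, so token counting resolves directly:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o")  # requires tiktoken >= 0.7.0
print(enc.name)                       # "o200k_base"
print(len(enc.encode("hello world")))
```
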
Krish Dholakia | 3a31e8011a | Merge pull request #4106 from BerriAI/litellm_anthropic_bedrock_tool_calling_fix: fix(bedrock_httpx.py): fix tool calling for anthropic bedrock calls w/ streaming | 2024-06-10 20:21:16 -07:00
Krrish Dholakia | 5056fd5778 | fix(bedrock_httpx.py): return the correct finish reason on streaming completion | 2024-06-10 14:47:49 -07:00
Krrish Dholakia | 2d95eaa5bc | fix(bedrock_httpx.py): fix tool calling for anthropic bedrock calls w/ streaming. Fixes https://github.com/BerriAI/litellm/issues/4091 | 2024-06-10 14:20:25 -07:00
Ishaan Jaff | ef9349e6a2 | Merge pull request #4086 from BerriAI/litellm_sdk_tool_calling_fic: [Fix] litellm sdk - allow ChatCompletionMessageToolCall and Function to be used as dicts | 2024-06-08 20:48:54 -07:00
Krish Dholakia | 3be558c4bb | Merge pull request #4080 from BerriAI/litellm_predibase_exception_mapping: fix(utils.py): improved predibase exception mapping | 2024-06-08 20:27:44 -07:00
Ishaan Jaff | af61eff8e3 | feat: allow ChatCompletionMessageToolCall and Function to be used as dicts | 2024-06-08 19:47:31 -07:00
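
A sketch of the dict-style access the commit above enables; the import path is an assumption, and field names follow the OpenAI tool-call shape:

```python
from litellm.types.utils import ChatCompletionMessageToolCall, Function

tool_call = ChatCompletionMessageToolCall(
    id="call_1",
    type="function",
    function=Function(name="get_weather", arguments='{"city": "SF"}'),
)

# Attribute access and dict-style access should now both work:
assert tool_call["function"]["name"] == tool_call.function.name
```
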
Krrish Dholakia | 0a886eed6a | fix(cost_calculator.py): fix tgai unmapped-model pricing; fixes an error where the tgai helper function returned None. Enforces stronger type hints, refactors code, adds more unit testing | 2024-06-08 19:43:57 -07:00
Krrish Dholakia | 39ee6be477 | fix(utils.py): improve predibase exception mapping; adds unit tests and better coverage for predibase errors | 2024-06-08 14:32:43 -07:00
Krrish Dholakia | 192dfbcd63 | fix(utils.py): fix helicone success logging integration. Fixes https://github.com/BerriAI/litellm/issues/4062 | 2024-06-08 08:59:56 -07:00
Ishaan Jaff | db0cc83ed5 | fix: vertex ai exception mapping | 2024-06-07 18:16:26 -07:00
Ishaan Jaff | 8958dff9d0 | fix: vertex ai exceptions | 2024-06-07 17:13:32 -07:00
Ishaan Jaff | 92841dfe1b | Merge branch 'main' into litellm_security_fix | 2024-06-07 16:52:25 -07:00
Krrish Dholakia | b16666b5dc | fix(utils.py): fix vertex ai exception mapping | 2024-06-07 16:06:31 -07:00
Krrish Dholakia | de98bd939c | fix(test_custom_callbacks_input.py): unit tests for 'turn_off_message_logging'; ensure no raw request is logged either | 2024-06-07 15:39:15 -07:00
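
A sketch of the setting under test above; with it on, message content (and, per the test, raw requests) should be redacted from logging callbacks:

```python
import litellm

litellm.turn_off_message_logging = True  # redact message content in callbacks
litellm.success_callback = ["langfuse"]

resp = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "sensitive text"}],
)
# The callback receives call metadata, but not the message content.
```
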
Ishaan Jaff | 80def35a04 | Merge pull request #4065 from BerriAI/litellm_use_common_func: [Refactor] refactor proxy_server.py to use a common function for `add_litellm_data_to_request` | 2024-06-07 14:02:17 -07:00