Krrish Dholakia | 8dbe0559dd | fix(utils.py): stream_options fix | 2024-07-03 19:57:34 -07:00
Krrish Dholakia | 344268e053 | fix(anthropic.py): support *real* anthropic tool calling + streaming (parses each chunk and translates to openai format) | 2024-07-03 19:48:35 -07:00
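The anthropic streaming fix above translates each Anthropic SSE event into an OpenAI-format delta. A minimal sketch of that mapping (the function name is illustrative, not litellm's actual helper; the event shapes are Anthropic's documented `content_block_delta` / `text_delta` / `input_json_delta` types):

```python
# Hypothetical illustration, not litellm's real internals: map one
# Anthropic streaming event to an OpenAI-style delta dict.

def anthropic_chunk_to_openai_delta(event: dict) -> dict:
    """Translate a single Anthropic SSE event into an OpenAI-format delta."""
    if event.get("type") == "content_block_delta":
        delta = event.get("delta", {})
        if delta.get("type") == "text_delta":
            # Plain text tokens map to the 'content' field.
            return {"content": delta.get("text", "")}
        if delta.get("type") == "input_json_delta":
            # Tool-call argument fragments map to tool_calls arguments.
            return {
                "tool_calls": [
                    {"index": 0, "function": {"arguments": delta.get("partial_json", "")}}
                ]
            }
    # Other event types (message_start, content_block_stop, ...) carry no delta.
    return {}
```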
Krrish Dholakia | ed5fc3d1f9 | fix(utils.py): fix vertex anthropic streaming | 2024-07-03 18:43:46 -07:00
Krrish Dholakia | 3d61a316cb | fix(router.py): bump azure default api version (allows 'tool_choice' to be passed to azure) | 2024-07-03 12:00:00 -07:00
Krish Dholakia | d38f01e956 | Merge branch 'main' into litellm_fix_httpx_transport | 2024-07-02 17:17:43 -07:00
Krrish Dholakia | 589c1c6280 | refactor(azure.py): replace the custom transport logic with our httpx client (done to fix the http/https proxy issues people are facing with proxy) | 2024-07-02 15:32:53 -07:00
Tiger Yu | 26630cd263 | Merge branch 'main' into litellm-fix-vertexaibeta | 2024-07-02 09:49:44 -07:00
Ishaan Jaff | fcf65d5215 | fix exception provider not known | 2024-07-01 21:05:37 -07:00
Krrish Dholakia | 4c95782f74 | fix(utils.py): fix openrouter tool calling params | 2024-07-01 08:25:29 -07:00
Krrish Dholakia | 5e521bd36e | fix(utils.py): fix openrouter params (fixes https://github.com/BerriAI/litellm/issues/4488) | 2024-07-01 08:24:06 -07:00
Indigo | d17816dd20 | Fix usage of parameter-based credentials when using vertex_ai_beta route | 2024-07-01 15:54:02 +10:00
Krish Dholakia | 58d0330cd7 | Merge pull request #4478 from BerriAI/litellm_support_response_schema_param_vertex_ai_old (feat(vertex_httpx.py): support the 'response_schema' param for older vertex ai models) | 2024-06-29 20:17:39 -07:00
Ishaan Jaff | 46698ae01f | fix param mapping for bedrock claude | 2024-06-29 19:33:51 -07:00
Krrish Dholakia | 7670c5bd13 | fix(utils.py): add 'enforce_validation' param | 2024-06-29 19:12:00 -07:00
Krrish Dholakia | 69d06cfbcd | fix(utils.py): fix passing additional param to post-call processing | 2024-06-29 18:58:44 -07:00
Ishaan Jaff | 19014dd931 | Merge pull request #4477 from BerriAI/litellm_fix_exception_mapping ([Fix] error str in OpenAI, Azure exception) | 2024-06-29 17:37:26 -07:00
Krrish Dholakia | b699d9a8b9 | fix(utils.py): support json schema validation | 2024-06-29 15:05:52 -07:00
Krrish Dholakia | 05dfc63b88 | feat(vertex_httpx.py): support the 'response_schema' param for older vertex ai models; if 'response_schema' is not supported for the model (e.g. gemini-1.5-flash), pass the schema in the prompt (user-controlled) | 2024-06-29 13:25:27 -07:00
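The response_schema fallback described above can be sketched as follows. This is an illustrative reconstruction, not litellm's real helper; the function name and signature are assumptions:

```python
# Hedged sketch of the fallback: when the target vertex model supports
# 'response_schema' natively, leave the prompt alone; otherwise embed the
# schema in the prompt itself (user-controlled prompt augmentation).
import json

def apply_response_schema(prompt: str, response_schema: dict, supports_schema: bool) -> str:
    """Return the prompt, embedding the schema when native support is missing."""
    if supports_schema:
        # Native support: the schema is sent in the request config instead.
        return prompt
    return (
        f"{prompt}\n\n"
        f"Respond with JSON matching this schema: {json.dumps(response_schema)}"
    )
```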
Ishaan Jaff | a6bc878a2a | fix error str in OpenAI, Azure exception | 2024-06-29 13:11:55 -07:00
Krrish Dholakia | 5718d1e205 | fix(utils.py): new helper function to check if provider/model supports the 'response_schema' param | 2024-06-29 12:40:29 -07:00
Krrish Dholakia | e73e9e12bc | fix(vertex_httpx.py): support passing response_schema to gemini | 2024-06-29 11:33:19 -07:00
Krrish Dholakia | 831745e710 | test(test_streaming.py): try-except replicate api instability | 2024-06-28 22:19:44 -07:00
Krrish Dholakia | ca04244a0a | fix(utils.py): correctly raise openrouter error | 2024-06-28 21:50:21 -07:00
Krrish Dholakia | c151a1d244 | fix(http_handler.py): raise more detailed http status errors | 2024-06-28 15:12:38 -07:00
Tiger Yu | b0c1d235be | Include vertex_ai_beta in vertex_ai param mapping | 2024-06-28 10:36:58 -07:00
Krish Dholakia | 7b54c9d5bc | Merge pull request #4446 from BerriAI/litellm_get_max_modified_tokens (fix(token_counter.py): new `get_modified_max_tokens` helper func) | 2024-06-27 21:43:23 -07:00
Krish Dholakia | 869275585a | Merge branch 'main' into litellm_response_cost_headers | 2024-06-27 21:33:09 -07:00
Krrish Dholakia | d421486a45 | fix(token_counter.py): new `get_modified_max_tokens` helper func (fixes https://github.com/BerriAI/litellm/issues/4439) | 2024-06-27 15:38:09 -07:00
Krrish Dholakia | 23a1f21f86 | fix(utils.py): add new special token for cleanup | 2024-06-26 22:52:50 -07:00
Krrish Dholakia | f533e1da09 | fix(utils.py): return 'response_cost' in completion call (closes https://github.com/BerriAI/litellm/issues/4335) | 2024-06-26 17:55:57 -07:00
Krrish Dholakia | 98daedaf60 | fix(router.py): fix setting httpx mounts | 2024-06-26 17:22:04 -07:00
Ishaan Jaff | 8ecf15185b | Merge pull request #4423 from BerriAI/litellm_forward_traceparent_otel ([Fix] forward OTEL traceparent header to provider) | 2024-06-26 17:16:23 -07:00
Ishaan Jaff | d213f81b4c | add initial support for volcengine | 2024-06-26 16:53:44 -07:00
Ishaan Jaff | 57852bada9 | fix handle_openai_chat_completion_chunk | 2024-06-26 16:01:50 -07:00
Ishaan Jaff | b16b846711 | forward otel traceparent in request headers | 2024-06-26 12:31:28 -07:00
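Forwarding the OTEL traceparent header, as the commit above describes, amounts to copying the W3C Trace Context `traceparent` header from the incoming request onto the outbound provider call so the provider's spans join the caller's trace. A minimal sketch (the function name is illustrative, not litellm's implementation):

```python
# Illustrative sketch: propagate the W3C Trace Context 'traceparent'
# header (case-insensitive lookup, per HTTP header semantics) from the
# incoming request headers to the provider request headers.

def forward_traceparent(incoming_headers: dict, outbound_headers: dict) -> dict:
    """Copy 'traceparent' onto the provider request headers, if present."""
    for key, value in incoming_headers.items():
        if key.lower() == "traceparent":
            outbound_headers["traceparent"] = value
    return outbound_headers
```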
Ishaan Jaff | 1cfe03c820 | add fireworks ai param mapping | 2024-06-26 06:43:18 -07:00
Krrish Dholakia | d98e00d1e0 | fix(router.py): set cooldown_time per model | 2024-06-25 16:51:55 -07:00
Krrish Dholakia | e813e984f7 | fix(predibase.py): support json schema on predibase | 2024-06-25 16:03:47 -07:00
Krrish Dholakia | 6889a4c0dd | fix(utils.py): predibase exception mapping, map 424 as a badrequest error | 2024-06-25 13:47:38 -07:00
Krrish Dholakia | 6e02ac0056 | fix(utils.py): add coverage for anthropic content policy error (vertex ai) | 2024-06-25 11:47:39 -07:00
Ishaan Jaff | 2bd993039b | Merge pull request #4405 from BerriAI/litellm_update_mock_completion ([Fix] use `n` in mock completion responses) | 2024-06-25 11:20:30 -07:00
Ishaan Jaff | ccf1bbc5d7 | fix using mock completion | 2024-06-25 11:14:40 -07:00
Ishaan Jaff | 07829514d1 | feat: add param mapping for nvidia nim | 2024-06-25 09:13:08 -07:00
Krrish Dholakia | d182ea0f77 | fix(utils.py): catch 422-status errors | 2024-06-24 19:41:48 -07:00
Krrish Dholakia | 123477b55a | fix(utils.py): fix exception_mapping check for errors (if exception already mapped, don't attach traceback to it) | 2024-06-24 16:55:19 -07:00
Krrish Dholakia | cea630022e | fix: add exception mapping + langfuse exception logging for streaming exceptions (fixes https://github.com/BerriAI/litellm/issues/4338) | 2024-06-22 21:26:15 -07:00
Krish Dholakia | 961e7ac95d | Merge branch 'main' into litellm_dynamic_tpm_limits | 2024-06-22 19:14:59 -07:00
Krrish Dholakia | c4b1540ce0 | fix(utils.py): support StreamingChoices in 'get_response_string' | 2024-06-22 15:45:52 -07:00
Krrish Dholakia | 532f24bfb7 | refactor: instrument 'dynamic_rate_limiting' callback on proxy | 2024-06-22 00:32:29 -07:00
Krrish Dholakia | 5e893ed13e | fix(utils.py): fix anthropic tool calling exception mapping (fixes https://github.com/BerriAI/litellm/issues/4348) | 2024-06-21 21:20:49 -07:00