| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Ishaan Jaff | `19014dd931` | Merge pull request #4477 from BerriAI/litellm_fix_exception_mapping ([Fix] Error str in OpenAI, Azure exception) | 2024-06-29 17:37:26 -07:00 |
| Krrish Dholakia | `b699d9a8b9` | fix(utils.py): support json schema validation | 2024-06-29 15:05:52 -07:00 |
| Krrish Dholakia | `05dfc63b88` | feat(vertex_httpx.py): support the 'response_schema' param for older vertex ai models; if 'response_schema' is not supported by the model (e.g. gemini-1.5-flash), pass it in the prompt (user-controlled) | 2024-06-29 13:25:27 -07:00 |
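The fallback behavior described in the commit above lends itself to a short sketch. This is a hedged illustration, not litellm's actual code: the capability set, function name, and prompt wording are all assumptions.

```python
import json

# Assumed capability set -- purely illustrative, not litellm's real model matrix.
SUPPORTS_RESPONSE_SCHEMA = {"gemini-1.5-pro"}

def apply_response_schema(model: str, prompt: str, schema: dict):
    """Use the native 'response_schema' request param when the model supports it;
    otherwise fall back to embedding the schema in the prompt text."""
    if model in SUPPORTS_RESPONSE_SCHEMA:
        return prompt, {"response_schema": schema}
    # Older models (e.g. gemini-1.5-flash): pass the schema inside the prompt.
    schema_text = json.dumps(schema)
    return f"{prompt}\nRespond in JSON matching this schema: {schema_text}", {}
```

Because the fallback text becomes ordinary prompt content (hence "user-controlled" in the commit message), schema conformance is best-effort rather than enforced by the API.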
| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Ishaan Jaff | `a6bc878a2a` | fix: error str in OpenAI, Azure exception | 2024-06-29 13:11:55 -07:00 |
| Krrish Dholakia | `5718d1e205` | fix(utils.py): new helper function to check if a provider/model supports the 'response_schema' param | 2024-06-29 12:40:29 -07:00 |
| Krrish Dholakia | `e73e9e12bc` | fix(vertex_httpx.py): support passing response_schema to gemini | 2024-06-29 11:33:19 -07:00 |
| Krrish Dholakia | `831745e710` | test(test_streaming.py): try/except around replicate api instability | 2024-06-28 22:19:44 -07:00 |
| Krrish Dholakia | `ca04244a0a` | fix(utils.py): correctly raise openrouter error | 2024-06-28 21:50:21 -07:00 |
| Krrish Dholakia | `c151a1d244` | fix(http_handler.py): raise more detailed http status errors | 2024-06-28 15:12:38 -07:00 |
| Tiger Yu | `b0c1d235be` | Include vertex_ai_beta in vertex_ai param mapping | 2024-06-28 10:36:58 -07:00 |
| Krish Dholakia | `7b54c9d5bc` | Merge pull request #4446 from BerriAI/litellm_get_max_modified_tokens (fix(token_counter.py): new `get_modified_max_tokens` helper func) | 2024-06-27 21:43:23 -07:00 |
| Krish Dholakia | `869275585a` | Merge branch 'main' into litellm_response_cost_headers | 2024-06-27 21:33:09 -07:00 |
| Krrish Dholakia | `d421486a45` | fix(token_counter.py): new `get_modified_max_tokens` helper func (fixes https://github.com/BerriAI/litellm/issues/4439) | 2024-06-27 15:38:09 -07:00 |
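The idea behind a `get_modified_max_tokens`-style helper can be sketched as follows; the signature here is an assumption chosen for illustration and will differ from the real helper in token_counter.py:

```python
def get_modified_max_tokens(requested_max_tokens: int,
                            model_context_window: int,
                            prompt_tokens: int):
    """Shrink the requested max_tokens so that prompt + completion fits
    within the model's context window; return None if no room remains."""
    available = model_context_window - prompt_tokens
    if available <= 0:
        return None  # the prompt alone already fills the window
    return min(requested_max_tokens, available)
```

Clamping here, instead of forwarding the raw value, avoids provider-side "context length exceeded" errors when callers request a large completion budget against a long prompt.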
| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Krrish Dholakia | `23a1f21f86` | fix(utils.py): add new special token for cleanup | 2024-06-26 22:52:50 -07:00 |
| Krrish Dholakia | `f533e1da09` | fix(utils.py): return 'response_cost' in completion call (closes https://github.com/BerriAI/litellm/issues/4335) | 2024-06-26 17:55:57 -07:00 |
| Krrish Dholakia | `98daedaf60` | fix(router.py): fix setting httpx mounts | 2024-06-26 17:22:04 -07:00 |
| Ishaan Jaff | `8ecf15185b` | Merge pull request #4423 from BerriAI/litellm_forward_traceparent_otel ([Fix] Forward OTEL traceparent header to provider) | 2024-06-26 17:16:23 -07:00 |
| Ishaan Jaff | `d213f81b4c` | add initial support for volcengine | 2024-06-26 16:53:44 -07:00 |
| Ishaan Jaff | `57852bada9` | fix handle_openai_chat_completion_chunk | 2024-06-26 16:01:50 -07:00 |
| Ishaan Jaff | `b16b846711` | forward otel traceparent in request headers | 2024-06-26 12:31:28 -07:00 |
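Forwarding the OTEL traceparent, as in the commit above, amounts to copying the W3C `traceparent` value from the incoming proxy request onto the outbound provider request, so provider-side spans join the caller's trace. A minimal sketch, with an illustrative function name:

```python
def forward_traceparent(incoming_headers: dict, outbound_headers: dict) -> dict:
    """Copy the W3C 'traceparent' header, if present, onto the outbound
    request headers without mutating either input dict."""
    headers = dict(outbound_headers)
    traceparent = incoming_headers.get("traceparent")
    if traceparent:
        headers["traceparent"] = traceparent
    return headers
```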
| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Ishaan Jaff | `1cfe03c820` | add fireworks ai param mapping | 2024-06-26 06:43:18 -07:00 |
| Krrish Dholakia | `d98e00d1e0` | fix(router.py): set cooldown_time per model | 2024-06-25 16:51:55 -07:00 |
| Krrish Dholakia | `e813e984f7` | fix(predibase.py): support json schema on predibase | 2024-06-25 16:03:47 -07:00 |
| Krrish Dholakia | `6889a4c0dd` | fix(utils.py): predibase exception mapping; map 424 as a BadRequest error | 2024-06-25 13:47:38 -07:00 |
| Krrish Dholakia | `6e02ac0056` | fix(utils.py): add coverage for anthropic content policy error (vertex ai) | 2024-06-25 11:47:39 -07:00 |
| Ishaan Jaff | `2bd993039b` | Merge pull request #4405 from BerriAI/litellm_update_mock_completion ([Fix] use `n` in mock completion responses) | 2024-06-25 11:20:30 -07:00 |
| Ishaan Jaff | `ccf1bbc5d7` | fix using mock completion | 2024-06-25 11:14:40 -07:00 |
| Ishaan Jaff | `07829514d1` | feat: add param mapping for nvidia nim | 2024-06-25 09:13:08 -07:00 |
| Krrish Dholakia | `d182ea0f77` | fix(utils.py): catch 422-status errors | 2024-06-24 19:41:48 -07:00 |
| Krrish Dholakia | `123477b55a` | fix(utils.py): fix exception_mapping check; if an exception is already mapped, don't attach a traceback to it | 2024-06-24 16:55:19 -07:00 |
| Krrish Dholakia | `cea630022e` | fix: add exception mapping + langfuse exception logging for streaming exceptions (fixes https://github.com/BerriAI/litellm/issues/4338) | 2024-06-22 21:26:15 -07:00 |
| Krish Dholakia | `961e7ac95d` | Merge branch 'main' into litellm_dynamic_tpm_limits | 2024-06-22 19:14:59 -07:00 |
| Krrish Dholakia | `c4b1540ce0` | fix(utils.py): support StreamingChoices in 'get_response_string' | 2024-06-22 15:45:52 -07:00 |
| Krrish Dholakia | `532f24bfb7` | refactor: instrument 'dynamic_rate_limiting' callback on proxy | 2024-06-22 00:32:29 -07:00 |
| Krrish Dholakia | `5e893ed13e` | fix(utils.py): fix anthropic tool calling exception mapping (fixes https://github.com/BerriAI/litellm/issues/4348) | 2024-06-21 21:20:49 -07:00 |
| Krrish Dholakia | `000d678445` | fix(utils.py): improve coverage for anthropic exception mapping | 2024-06-21 21:15:10 -07:00 |
| Ishaan Jaff | `4e795dad81` | fix ContentPolicyViolationError mapping | 2024-06-21 16:11:42 -07:00 |
| Krrish Dholakia | `16941eee43` | fix(utils.py): re-integrate separate gemini optional param mapping (google ai studio) (fixes https://github.com/BerriAI/litellm/issues/4333) | 2024-06-21 09:01:32 -07:00 |
| Krrish Dholakia | `fdb7101aaf` | fix(utils.py): add extra body params for text completion calls | 2024-06-21 08:28:38 -07:00 |
| Wonseok Lee (Jack) | `c4c7d1b367` | Merge branch 'main' into feat/friendliai | 2024-06-21 10:50:03 +09:00 |
| Krish Dholakia | `c8a40eca05` | Merge pull request #4313 from BerriAI/litellm_drop_specific_params (fix(utils.py): allow dropping specific openai params) | 2024-06-20 15:15:19 -07:00 |
| Krrish Dholakia | `5cb0c1ad70` | fix(utils.py): add new error string to context window exception mapping | 2024-06-20 12:01:11 -07:00 |
| Krrish Dholakia | `a0f08e0dad` | fix(utils.py): allow dropping specific openai params | 2024-06-20 11:48:06 -07:00 |
| Krrish Dholakia | `d6eb986b8e` | fix(utils.py): add together ai exception mapping | 2024-06-20 11:29:57 -07:00 |
| Krish Dholakia | `71716bec48` | Merge pull request #4295 from BerriAI/litellm_gemini_pricing_2 (Vertex AI: character-based cost calculation) | 2024-06-19 19:17:09 -07:00 |
| Krrish Dholakia | `682ec33aa0` | fix(litellm_logging.py): initialize global variables (fixes https://github.com/BerriAI/litellm/issues/4281) | 2024-06-19 18:39:45 -07:00 |
| Krrish Dholakia | `16da21e839` | feat(llm_cost_calc/google.py): character-based cost calculation for vertex ai, using characters in query/response (closes https://github.com/BerriAI/litellm/issues/4165) | 2024-06-19 17:18:42 -07:00 |
| Ishaan Jaff | `3c6785c207` | add ft:gpt4 | 2024-06-19 16:49:45 -07:00 |
| Ishaan Jaff | `8c07f3edf4` | fix: remove get_or_generate_uuid | 2024-06-19 14:10:35 -07:00 |
| Ishaan Jaff | `f5ebc1a042` | Merge pull request #4282 from BerriAI/litellm_add_openrouter_exception_mapping (feat: add openrouter exception mapping) | 2024-06-19 12:25:12 -07:00 |