Roni Gurvich | 32a085accf | gunicorn update | 2024-05-05 13:25:59 +03:00
Krrish Dholakia | 918367cc7b | test: skip hanging test | 2024-05-05 00:27:38 -07:00
Krrish Dholakia | e95be13f10 | fix(router.py): fix router retry policy logic | 2024-05-04 23:02:50 -07:00
Krrish Dholakia | 0529d7eaa3 | bump: version 1.35.38 → 1.36.0 | 2024-05-04 22:23:20 -07:00
Krish Dholakia | 6be20f5fc6 | Merge pull request #3455 from BerriAI/litellm_assistants_support: feat(openai.py): add support for openai assistants | 2024-05-04 22:21:44 -07:00
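The merge above (#3455) lands the assistants support that later commits in this log build on: `litellm.create_thread()`, an `add_message` endpoint, and `litellm.get_messages()`. As a rough illustration of that thread/message call flow, here is a self-contained, in-memory toy version; the function names mirror the commit messages, but the classes and storage here are hypothetical stand-ins, not litellm's actual implementation.

```python
# Toy in-memory sketch of the assistants thread/message flow named in the
# commits (create_thread / add_message / get_messages). Hypothetical
# stand-in, not litellm's real API.
import itertools
from dataclasses import dataclass, field

_ids = itertools.count(1)
_threads = {}  # thread_id -> Thread

@dataclass
class Thread:
    id: str
    messages: list = field(default_factory=list)

def create_thread() -> Thread:
    thread = Thread(id=f"thread_{next(_ids)}")
    _threads[thread.id] = thread
    return thread

def add_message(thread_id: str, role: str, content: str) -> dict:
    msg = {"role": role, "content": content}
    _threads[thread_id].messages.append(msg)
    return msg

def get_messages(thread_id: str) -> list:
    return list(_threads[thread_id].messages)

t = create_thread()
add_message(t.id, "user", "What is litellm?")
print(get_messages(t.id))  # [{'role': 'user', 'content': 'What is litellm?'}]
```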
Ishaan Jaff | d4c2eb1797 | fix post call rule test | 2024-05-04 22:04:53 -07:00
Krrish Dholakia | 06ae584473 | fix(types/openai.py): fix python3.8 typing issue | 2024-05-04 22:04:17 -07:00
Krrish Dholakia | 66129bc921 | fix(typing/openai.py): fix openai typing error (version-related) | 2024-05-04 22:02:43 -07:00
Krrish Dholakia | 2deac08ff1 | fix(types/openai.py): fix typing import | 2024-05-04 21:53:08 -07:00
Krrish Dholakia | 1195bf296b | fix(openai.py): fix typing import for python 3.8 | 2024-05-04 21:49:30 -07:00
Krrish Dholakia | f2bf6411d8 | fix(openai.py): fix linting error | 2024-05-04 21:48:42 -07:00
Krish Dholakia | 47078f4d84 | Merge pull request #3452 from tothandras/fix/openmeter: Fix OpenMeter sync logger | 2024-05-04 21:41:44 -07:00
Ishaan Jaff | 4d1806bc95 | fix - vertex ai exceptions | 2024-05-04 21:32:10 -07:00
Krrish Dholakia | 5406205e4b | test(test_assistants.py): cleanup tests | 2024-05-04 21:31:07 -07:00
Krrish Dholakia | 8fe6c9b401 | feat(assistants/main.py): support litellm.get_assistants() and litellm.get_messages() | 2024-05-04 21:30:28 -07:00
Krrish Dholakia | cad01fb586 | feat(assistants/main.py): support 'litellm.get_threads' | 2024-05-04 21:14:03 -07:00
Ishaan Jaff | 157b2b3a06 | fix python 3.8 install | 2024-05-04 21:00:39 -07:00
Ishaan Jaff | 4dcb4b81f1 | fix - python 3.8 error | 2024-05-04 20:44:40 -07:00
Ishaan Jaff | fc63c3f555 | bump: version 1.35.38 → 1.35.39 | 2024-05-04 20:43:05 -07:00
Ishaan Jaff | 713e04848d | Merge pull request #3460 from BerriAI/litellm_use_retry_policy_per_mg: [Feat] Set a Retry Policy per model group | 2024-05-04 20:42:40 -07:00
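The merge above (#3460) lets a retry policy be set per model group, layered on top of the router-wide policy from #3456 (per-exception-type retry counts such as `RateLimitErrorRetries`). The field names below follow the PR titles in this log, but the classes and lookup function are a minimal self-contained sketch of the idea, not litellm's actual Router code.

```python
# Sketch of per-model-group retry-policy lookup: a model group's policy
# overrides the router-wide policy, which overrides the default retry count.
# Illustrative only; names are taken from the PR titles, not litellm source.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RetryPolicy:
    BadRequestErrorRetries: Optional[int] = None
    RateLimitErrorRetries: Optional[int] = None
    ContentPolicyViolationErrorRetries: Optional[int] = None

def get_num_retries(
    exception_name: str,
    default_retries: int,
    retry_policy: Optional[RetryPolicy] = None,
    model_group: Optional[str] = None,
    model_group_retry_policy: Optional[dict] = None,
) -> int:
    # Per-model-group policy wins over the router-wide policy.
    policy = None
    if model_group_retry_policy and model_group in model_group_retry_policy:
        policy = model_group_retry_policy[model_group]
    elif retry_policy is not None:
        policy = retry_policy
    if policy is not None:
        override = getattr(policy, f"{exception_name}Retries", None)
        if override is not None:
            return override
    return default_retries

policy = RetryPolicy(RateLimitErrorRetries=4)
per_group = {"gpt-4": RetryPolicy(RateLimitErrorRetries=0)}
print(get_num_retries("RateLimitError", 2, policy, "gpt-4", per_group))    # 0
print(get_num_retries("RateLimitError", 2, policy, "gpt-3.5", per_group))  # 4
print(get_num_retries("BadRequestError", 2, policy, "gpt-3.5", per_group)) # 2
```

Setting a rate-limit retry count of 0 for one group (as in `"gpt-4"` above) is the kind of per-group override the feature enables, without touching the policy other model groups use.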
Ishaan Jaff | ba065653ca | Merge pull request #3457 from BerriAI/litellm_return_num_retries_exceptions: [Feat] return num_retries in litellm.Router exceptions | 2024-05-04 20:41:54 -07:00
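The merge above (#3457) is about surfacing the configured retry count on the exception the Router ultimately raises, so callers can see how many retries were attempted before giving up. A minimal sketch of that pattern, with illustrative names rather than litellm's actual Router internals:

```python
# Sketch: attach num_retries to the exception re-raised after retries are
# exhausted, so the caller can inspect it. Illustrative stand-in classes.
class RateLimitError(Exception):
    def __init__(self, message):
        super().__init__(message)
        self.num_retries = None  # filled in by the retry loop below

def call_with_retries(fn, num_retries):
    last_exc = None
    for attempt in range(num_retries + 1):  # one initial try + num_retries retries
        try:
            return fn()
        except RateLimitError as e:
            last_exc = e
    last_exc.num_retries = num_retries  # surface the retry count to the caller
    raise last_exc

def always_rate_limited():
    raise RateLimitError("429: slow down")

try:
    call_with_retries(always_rate_limited, num_retries=3)
except RateLimitError as e:
    print(e.num_retries)  # 3
```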
Ishaan Jaff | f09da3f14c | test - test setting retry policies per model groups | 2024-05-04 20:40:56 -07:00
Ishaan Jaff | 90ac1e3fd9 | feat - set retry policy per model group | 2024-05-04 20:39:51 -07:00
Ishaan Jaff | 95864a1d37 | fix router debug logs | 2024-05-04 20:24:15 -07:00
Krrish Dholakia | b7796c7487 | feat(assistants/main.py): add 'add_message' endpoint | 2024-05-04 19:56:11 -07:00
Krrish Dholakia | 681a95e37b | fix(assistants/main.py): support litellm.create_thread() call | 2024-05-04 19:35:37 -07:00
Ishaan Jaff | dfc22194b2 | fix - undo local dev changes | 2024-05-04 19:11:57 -07:00
Ishaan Jaff | 6b59aeb603 | fix return num retries in exceptions | 2024-05-04 19:09:34 -07:00
Ishaan Jaff | 5be8c95c6e | fix don't return num retries in utils.py | 2024-05-04 19:07:28 -07:00
Ishaan Jaff | 0f03e53348 | feat return num retries in exceptions | 2024-05-04 18:50:38 -07:00
Ishaan Jaff | 87e165e413 | Merge pull request #3456 from BerriAI/litellm_router_set_retry_policy_errors: [FEAT] router set custom num retries for ContentPolicyViolationErrorRetries, RateLimitErrorRetries, BadRequestErrorRetries etc | 2024-05-04 18:26:03 -07:00
Ishaan Jaff | 495d3a9646 | router set dynamic retry policies | 2024-05-04 18:13:43 -07:00
Ishaan Jaff | 009f7c9bfc | support dynamic retry policies | 2024-05-04 18:10:15 -07:00
Ishaan Jaff | f70ae68188 | fix router test | 2024-05-04 17:58:54 -07:00
Ishaan Jaff | bbf5d79069 | docs - set retry policy | 2024-05-04 17:52:01 -07:00
Ishaan Jaff | 8d128a4b91 | test - router retry policy | 2024-05-04 17:30:30 -07:00
Krrish Dholakia | 84c31a5528 | feat(openai.py): add support for openai assistants (v0 commit; closes https://github.com/BerriAI/litellm/issues/2842) | 2024-05-04 17:27:48 -07:00
Ishaan Jaff | b83901a861 | Merge pull request #3451 from BerriAI/litellm_return_model_api_base: [Feat] Return model, api_base and first 100 chars of messages in Azure Exceptions | 2024-05-04 17:07:09 -07:00
Ishaan Jaff | 9e4e467039 | test router - retry policy | 2024-05-04 17:06:34 -07:00
Ishaan Jaff | 5d17c814a3 | router - use retry policy | 2024-05-04 17:04:51 -07:00
Ishaan Jaff | 6d1981fbaa | init router retry policy | 2024-05-04 16:59:14 -07:00
Krish Dholakia | 601e8a1172 | Merge pull request #3448 from BerriAI/litellm_anthropic_fix: fix(factory.py): support 'function' openai message role for anthropic | 2024-05-04 16:36:05 -07:00
Ishaan Jaff | 85b2137f9c | fix - test exceptions vertex ai | 2024-05-04 16:09:20 -07:00
Andras Toth | d99555075f | fix(integrations): OpenMeter sync logger | 2024-05-04 22:47:20 +02:00
Ishaan Jaff | 7150df653f | test azure exceptions are more descriptive | 2024-05-04 13:02:29 -07:00
Ishaan Jaff | d45328dda6 | Merge pull request #3450 from BerriAI/revert-3397-main: Revert "Add return_exceptions to litellm.batch_completion" | 2024-05-04 13:01:25 -07:00
Ishaan Jaff | df8e33739d | Revert "Add return_exceptions to litellm.batch_completion" | 2024-05-04 13:01:17 -07:00
Ishaan Jaff | d968dedd77 | Merge pull request #1530 from TanaroSch/main: change max_tokens type to int | 2024-05-04 12:47:15 -07:00
Krish Dholakia | a3905bad94 | Merge pull request #3447 from BerriAI/litellm_fix_redis_ping: fix(caching.py): fix redis caching ping check | 2024-05-04 12:46:31 -07:00
Krrish Dholakia | 09d7121af2 | fix(bedrock.py): map finish reason for bedrock | 2024-05-04 12:45:40 -07:00