Ishaan Jaff | 7ce2aa83c1 | feat - set custom routing strategy | 2024-06-20 13:49:44 -07:00

Krrish Dholakia | 477ed99896 | feat(router.py): allow user to call specific deployment via id | 2024-06-19 13:02:46 -07:00
    Allows easier health checks for specific deployments by just passing in model id
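Commit 477ed99896 lets a caller target one deployment directly instead of the whole model group. A minimal sketch of a per-deployment health check, assuming the router resolves the `model` argument against each deployment's `model_info.id` (the ids and API keys below are placeholders):

```python
from litellm import Router

# Two deployments of the same model group; each gets an explicit id via model_info.
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-key-1"},
            "model_info": {"id": "deployment-1"},  # hypothetical deployment id
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-key-2"},
            "model_info": {"id": "deployment-2"},
        },
    ]
)

async def health_check(deployment_id: str) -> bool:
    """Ping a single deployment by its id instead of the shared model group name."""
    try:
        await router.acompletion(
            model=deployment_id,  # deployment id rather than "gpt-3.5-turbo"
            messages=[{"role": "user", "content": "ping"}],
        )
        return True
    except Exception:
        return False
```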
Krrish Dholakia | 121f4d8a1b | fix(router.py): support multiple orgs in 1 model definition | 2024-06-18 19:36:58 -07:00
    Closes https://github.com/BerriAI/litellm/issues/3949
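For 121f4d8a1b, the linked issue asks for one model entry to cover several OpenAI organizations. A sketch of what such a config could look like, assuming `organization` in `litellm_params` accepts a list after this change (org ids and key are placeholders):

```python
from litellm import Router

# One definition, multiple orgs: the router can expand this into one
# deployment per organization (assumption based on the commit message).
router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {
                "model": "gpt-4",
                "api_key": "sk-...",                        # placeholder
                "organization": ["org-alpha", "org-beta"],  # hypothetical org ids
            },
        }
    ]
)
```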
Krrish Dholakia | cc1ec55e5b | fix(vertex_httpx.py): check if model supports system messages before sending separately | 2024-06-17 17:32:38 -07:00

Krish Dholakia | 63780e1ccf | Merge pull request #4207 from BerriAI/litellm_content_policy_fallbacks | 2024-06-14 18:55:11 -07:00
    feat(router.py): support content policy fallbacks

Krrish Dholakia | 734bd5ef85 | feat(router.py): support content policy fallbacks | 2024-06-14 17:15:44 -07:00
    Closes https://github.com/BerriAI/litellm/issues/2632
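Content policy fallbacks (734bd5ef85) let a request retry on a different deployment when the original model raises a content-policy violation. A minimal sketch, assuming the Router accepts a `content_policy_fallbacks` mapping from a model group to its fallback groups (model names and keys are placeholders):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "claude-3",
            "litellm_params": {"model": "anthropic/claude-3-haiku-20240307", "api_key": "anthropic-key"},
        },
        {
            "model_name": "my-fallback-model",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-key"},
        },
    ],
    # On a content policy violation from "claude-3", retry on "my-fallback-model".
    content_policy_fallbacks=[{"claude-3": ["my-fallback-model"]}],
)

response = router.completion(
    model="claude-3",
    messages=[{"role": "user", "content": "hello"}],
)
```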
Ishaan Jaff | 5f7423047f | feat - send llm exception alert on acompletion, aembedding etc | 2024-06-14 10:11:24 -07:00

Ishaan Jaff | bd341c69b5 | fix - send alert on router level exceptions | 2024-06-14 08:41:12 -07:00

Ishaan Jaff | b4274bc852 | fix model hub not loading | 2024-06-12 19:38:31 -07:00

Ishaan Jaff | 5e411a45d5 | fix azure fallbacks test | 2024-06-10 21:50:54 -07:00

Ishaan Jaff | 94210a86b4 | test - client side fallbacks | 2024-06-10 15:00:36 -07:00

Ishaan Jaff | 878fa676d7 | fix - support fallbacks as list | 2024-06-10 14:32:28 -07:00
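The two fallback commits above (94210a86b4, 878fa676d7) concern fallbacks passed by the caller on a single request rather than configured on the Router. A sketch of that request-level form, assuming `fallbacks` can be supplied per request as a plain list of model group names (names and keys are placeholders):

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-4", "litellm_params": {"model": "gpt-4", "api_key": "openai-key"}},
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-key"}},
    ]
)

# Client-side fallback: if "gpt-4" fails, retry the same request on "gpt-3.5-turbo".
response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "hello"}],
    fallbacks=["gpt-3.5-turbo"],
)
```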
Krrish Dholakia | 58cce8a922 | fix(types/router.py): ModelGroupInfo to handle mode being None and supported_openai_params not being a list | 2024-06-08 20:13:45 -07:00

Krish Dholakia | 9d81f1cc52 | Merge pull request #4049 from BerriAI/litellm_cleanup_traceback | 2024-06-07 08:03:22 -07:00
    refactor: replace 'traceback.print_exc()' with logging library

Krish Dholakia | 0b3165e5ee | Merge pull request #4046 from BerriAI/litellm_router_order | 2024-06-06 16:37:03 -07:00
    feat(router.py): enable setting 'order' for a deployment in model list

Krish Dholakia | ea4334f760 | Merge branch 'main' into litellm_cleanup_traceback | 2024-06-06 16:32:08 -07:00

Krrish Dholakia | 43991afc34 | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00
    enables req. prioritization to work across multiple instances of litellm
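Commit 43991afc34 backs the request-prioritization queue with Redis so that several litellm instances share the same queue. A sketch of how prioritization is typically invoked, assuming the Router takes standard `redis_host`/`redis_port`/`redis_password` arguments and that `acompletion` accepts a `priority` value where lower means more urgent (connection details and key are placeholders):

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-key"}},
    ],
    # Shared Redis cache lets every litellm instance see the same priority queue.
    redis_host="my-redis-host",          # placeholder
    redis_port=6379,
    redis_password="my-redis-password",  # placeholder
)

async def main():
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "urgent request"}],
        priority=0,  # 0 = most urgent; larger numbers wait longer in the queue
    )
    print(response)

asyncio.run(main())
```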
Krrish Dholakia | e391e30285 | refactor: replace 'traceback.print_exc()' with logging library | 2024-06-06 13:47:43 -07:00
    allows error logs to be in json format for otel logging

Krrish Dholakia | 005128addc | feat(router.py): enable setting 'order' for a deployment in model list | 2024-06-06 09:46:51 -07:00
    Allows user to control which model gets called first in model group
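Commit 005128addc adds an `order` field so one deployment in a model group is tried before the others. A minimal sketch, assuming `order` lives in `litellm_params` and lower values are attempted first (keys are placeholders):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "gpt-4", "api_key": "sk-primary", "order": 1},   # tried first
        },
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "gpt-4", "api_key": "sk-secondary", "order": 2},  # used if order 1 fails
        },
    ]
)
```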
Krrish Dholakia | 20cb525a5c | feat(assistants/main.py): add assistants api streaming support | 2024-06-04 16:30:35 -07:00

Krish Dholakia | 73ae4860c0 | Merge pull request #3992 from BerriAI/litellm_router_default_request_timeout | 2024-06-03 21:37:38 -07:00
    fix(router.py): use `litellm.request_timeout` as default for router clients

Krish Dholakia | 127d1457de | Merge pull request #3996 from BerriAI/litellm_azure_assistants_api_support | 2024-06-03 21:05:03 -07:00
    feat(assistants/main.py): Azure Assistants API support

Krrish Dholakia | a2ba63955a | feat(assistants/main.py): Closes https://github.com/BerriAI/litellm/issues/3993 | 2024-06-03 18:47:05 -07:00

Krrish Dholakia | ae52e7559e | fix(router.py): use litellm.request_timeout as default for router clients | 2024-06-03 14:19:53 -07:00
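Commit ae52e7559e makes the router's HTTP clients fall back to the global `litellm.request_timeout` when a deployment sets no explicit timeout. A small sketch of what that implies for configuration (the timeout value is illustrative):

```python
import litellm
from litellm import Router

# Global default picked up by router clients when no per-deployment timeout is set.
litellm.request_timeout = 120  # seconds, illustrative value

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-key"}},
    ]
)
```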
Krrish Dholakia | 96120ab2c5 | fix(router.py): fix should_retry logic for authentication errors | 2024-06-03 13:12:00 -07:00

Ishaan Jaff | 0acb6e5180 | ci/cd run again | 2024-06-01 21:19:32 -07:00

Ishaan Jaff | 2d1aaf5cf7 | fix test_rate_limit[usage-based-routing-True-3-2] | 2024-06-01 21:18:23 -07:00

Ishaan Jaff | ad920be3bf | fix async_function_with_retries | 2024-06-01 19:00:22 -07:00

Ishaan Jaff | e149ca73f6 | Merge pull request #3963 from BerriAI/litellm_set_allowed_fail_policy | 2024-06-01 17:57:11 -07:00
    [FEAT] - set custom AllowedFailsPolicy on litellm.Router

Ishaan Jaff | dd25d83087 | Merge pull request #3962 from BerriAI/litellm_return_num_rets_max_exceptions | 2024-06-01 17:48:38 -07:00
    [Feat] return `num_retries` and `max_retries` in exceptions

Ishaan Jaff | 728fead32c | fix current_attempt, num_retries not defined | 2024-06-01 17:42:37 -07:00

Ishaan Jaff | a11175c05b | feat - set custom AllowedFailsPolicy | 2024-06-01 17:26:21 -07:00
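Commit a11175c05b lets the allowed-fails threshold differ per exception type before a deployment is cooled down. A sketch, assuming an `AllowedFailsPolicy` type with per-error-type fields; the import path and field names follow the commit and PR titles and are assumptions:

```python
from litellm import Router
from litellm.types.router import AllowedFailsPolicy  # assumed import path

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-key"}},
    ],
    # Tolerate many content-policy failures, but cool a deployment down quickly on rate limits.
    allowed_fails_policy=AllowedFailsPolicy(
        ContentPolicyViolationErrorAllowedFails=1000,  # assumed field name
        RateLimitErrorAllowedFails=10,                 # assumed field name
    ),
)
```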
Ishaan Jaff | a485b19215 | fix - return in LITELLM_EXCEPTION_TYPES | 2024-06-01 17:05:33 -07:00

Ishaan Jaff | 2341d99bdc | feat - add num retries and max retries in exception | 2024-06-01 16:53:00 -07:00

Krrish Dholakia | 4ffbd80584 | fix(router.py): simplify scheduler | 2024-06-01 16:09:57 -07:00
    move the scheduler poll queuing logic into the router class, making it easier to use

Krish Dholakia | 1529f665cc | Merge pull request #3954 from BerriAI/litellm_simple_request_prioritization | 2024-05-31 23:29:09 -07:00
    feat(scheduler.py): add request prioritization scheduler

Krrish Dholakia | 9a3789ce69 | fix(router.py): fix param | 2024-05-31 21:52:23 -07:00

Krrish Dholakia | 6221fabecf | fix(router.py): fix cooldown logic for usage-based-routing-v2 pre-call-checks | 2024-05-31 21:32:01 -07:00

Krish Dholakia | c049b6b4af | Merge pull request #3936 from BerriAI/litellm_assistants_api_proxy | 2024-05-31 18:43:22 -07:00
    feat(proxy_server.py): add assistants api endpoints to proxy server

Ishaan Jaff | f6617c94e3 | fix - model hub supported_openai_params | 2024-05-31 07:27:21 -07:00

Krrish Dholakia | 2fdf4a7bb4 | feat(proxy_server.py): add assistants api endpoints to proxy server | 2024-05-30 22:44:43 -07:00

Krish Dholakia | 73e3dba2f6 | Merge pull request #3928 from BerriAI/litellm_audio_speech_endpoint | 2024-05-30 17:30:42 -07:00
    feat(main.py): support openai tts endpoint
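PR #3928 (73e3dba2f6) adds text-to-speech support. A minimal sketch of the call shape, assuming a `litellm.speech()` entry point whose response can be streamed to a file like the OpenAI SDK's audio response (model, voice, and key are placeholders):

```python
from pathlib import Path

import litellm

speech_file = Path("speech.mp3")

response = litellm.speech(
    model="openai/tts-1",     # placeholder model name
    voice="alloy",
    input="the quick brown fox jumped over the lazy dogs",
    api_key="openai-key",     # placeholder
)
# Assumed helper mirroring the OpenAI SDK's binary response object.
response.stream_to_file(speech_file)
```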
Krrish Dholakia | eb159b64e1 | fix(openai.py): fix openai response for /audio/speech endpoint | 2024-05-30 16:41:06 -07:00

Krrish Dholakia | 66e08cac9b | fix(router.py): cooldown on 404 errors | 2024-05-30 10:57:38 -07:00
    https://github.com/BerriAI/litellm/issues/3884

Krrish Dholakia | 482929bece | fix(router.py): security fix - don't show api key in invalid model setup error message | 2024-05-29 16:14:57 -07:00

Krish Dholakia | 4fd3994b4e | Merge branch 'main' into litellm_batch_completions | 2024-05-28 22:38:05 -07:00

Ishaan Jaff | 17c6ea2272 | fix - update abatch_completion docstring | 2024-05-28 22:27:09 -07:00

Ishaan Jaff | aca5118a83 | feat - router add abatch_completion | 2024-05-28 22:19:33 -07:00
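Commit aca5118a83 adds `abatch_completion`, which fans one prompt out to several model groups in parallel. A sketch of the expected call shape, assuming `models` takes a list of model group names and one response is returned per model (names and keys are placeholders):

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-key"}},
        {"model_name": "claude-3-haiku", "litellm_params": {"model": "anthropic/claude-3-haiku-20240307", "api_key": "anthropic-key"}},
    ]
)

async def main():
    # One request, answered by every listed model group; returns a list of responses.
    responses = await router.abatch_completion(
        models=["gpt-3.5-turbo", "claude-3-haiku"],
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    for r in responses:
        print(r)

asyncio.run(main())
```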
Krrish Dholakia | 98ebcad52d | fix(router.py): support batch completions fastest response streaming | 2024-05-28 21:51:09 -07:00

Krrish Dholakia | 012bde0b07 | fix(router.py): support comma-separated model list for batch completion fastest response | 2024-05-28 21:34:37 -07:00
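The last two commits (98ebcad52d, 012bde0b07) refine the "fastest response" variant of batch completion, where a comma-separated model string is fanned out and the first deployment to answer wins. A sketch, assuming an `abatch_completion_fastest_response` method that accepts that comma-separated form (method name, model names, and keys are assumptions or placeholders):

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-4o", "litellm_params": {"model": "gpt-4o", "api_key": "openai-key"}},
        {"model_name": "claude-3-haiku", "litellm_params": {"model": "anthropic/claude-3-haiku-20240307", "api_key": "anthropic-key"}},
    ]
)

async def main():
    # Both model groups are called concurrently; only the first (fastest) response is returned.
    response = await router.abatch_completion_fastest_response(  # assumed method name
        model="gpt-4o, claude-3-haiku",  # comma-separated model groups
        messages=[{"role": "user", "content": "hello"}],
    )
    print(response)

asyncio.run(main())
```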