Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-26 03:04:13 +00:00)
feat(router.py): Support Loadbalancing batch azure api endpoints (#5469)
* feat(router.py): initial commit for loadbalancing azure batch api endpoints

  Closes https://github.com/BerriAI/litellm/issues/5396

* fix(router.py): working `router.acreate_file()`
* feat(router.py): working `router.acreate_batch` endpoint
* feat(router.py): expose `router.aretrieve_batch` function

  Makes it easy for the user to retrieve batch information

* feat(router.py): support `router.alist_batches` endpoint

  Adds support for getting all batches across all endpoints

* feat(router.py): working loadbalancing on `/v1/files`
* feat(proxy_server.py): working loadbalancing on `/v1/batches`
* feat(proxy_server.py): working loadbalancing on Retrieve + List batch
parent 7a22faaba4
commit 9f3fa29624
10 changed files with 667 additions and 37 deletions
@@ -4645,6 +4645,8 @@ def get_llm_provider(
+    For router -> Can also give the whole litellm param dict -> this function will extract the relevant details
+
     Raises Error - if unable to map model to a provider

     Return model, custom_llm_provider, dynamic_api_key, api_base
     """
     try:
         ## IF LITELLM PARAMS GIVEN ##
|
Loading…
Add table
Add a link
Reference in a new issue