Litellm dev 12 28 2024 p2 (#7458)

* docs(sidebar.js): docs for supporting model access groups for wildcard routes

* feat(key_management_endpoints.py): add check that the user is a premium_user when adding a model access group for a wildcard route

* refactor(docs/): make 'control model access' a root-level doc in the proxy sidebar

easier to discover how to control model access on litellm
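
A minimal sketch of how a caller can use an access group once it is defined, assuming a locally running proxy: issue a virtual key whose "models" list names the access group instead of individual deployments. The access group name "beta-models", proxy URL, and master key below are placeholder assumptions, not values from this commit.

# Sketch: create a key scoped to a model access group via the proxy's
# /key/generate endpoint. "beta-models" is an assumed access group name,
# defined in the proxy config under model_info.access_groups.
import requests

resp = requests.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},  # assumed proxy master key
    json={"models": ["beta-models"]},  # access group name instead of raw model names
)
print(resp.json())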

* docs: more cleanup

* feat(fireworks_ai/): add document inlining support

Enables users to call non-vision models with images, PDFs, etc.
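
A rough sketch of what this enables, using the model name from the test added below and a placeholder document URL: an image/PDF part is passed in an OpenAI-style message to a normally text-only fireworks ai model.

# Sketch: call a text-only fireworks ai model with a document part.
# The document URL is a placeholder; model name matches the test added below.
import litellm

response = litellm.completion(
    model="fireworks_ai/llama-3.1-8b-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this document."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.pdf"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)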

* test(test_fireworks_ai_translation.py): add unit testing for fireworks ai transform inline helper util

* docs(docs/): add document inlining details to fireworks ai docs

* feat(fireworks_ai/): allow user to dynamically disable auto add transform inline

allows client-side disabling of this feature for proxy users
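
A sketch of the client-side opt-out when calling through the proxy; the flag name disable_add_transform_inline_image_block, proxy URL, and key below are assumptions for illustration and are not confirmed by this commit message.

# Sketch: per-request opt-out of automatic document inlining via the litellm
# proxy. Flag name and proxy credentials are assumed for illustration.
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="fireworks_ai/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={"disable_add_transform_inline_image_block": True},  # assumed flag name
)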

* feat(fireworks_ai/): return 'supports_vision' and 'supports_pdf_input' true on all fireworks ai models

now true, since fireworks ai supports document inlining

* test: fix tests

* fix(router.py): add unit testing for _is_model_access_group_for_wildcard_route
Krish Dholakia 2024-12-28 19:38:06 -08:00 committed by GitHub
parent 14324639a9
commit 9150722a00
19 changed files with 832 additions and 305 deletions


@@ -1240,3 +1240,15 @@ def test_token_counter_with_image_url_with_detail_high():
    )
    print("tokens", _tokens)
    assert _tokens == DEFAULT_IMAGE_TOKEN_COUNT + 7


def test_fireworks_ai_document_inlining():
    """
    With document inlining, all fireworks ai models are now:
    - supports_pdf
    - supports_vision
    """
    from litellm.utils import supports_pdf_input, supports_vision

    assert supports_pdf_input("fireworks_ai/llama-3.1-8b-instruct") is True
    assert supports_vision("fireworks_ai/llama-3.1-8b-instruct") is True