Krish Dholakia | f3fa2160a0 | 2024-09-21 18:51:53 -07:00

LiteLLM Minor Fixes & Improvements (09/21/2024) (#5819)

* fix(router.py): fix error message
* Litellm disable keys (#5814)
* build(schema.prisma): allow blocking/unblocking keys
  Fixes https://github.com/BerriAI/litellm/issues/5328
* fix(key_management_endpoints.py): fix pop
* feat(auth_checks.py): allow admin to enable/disable virtual keys
  Closes https://github.com/BerriAI/litellm/issues/5328
* docs(vertex.md): add auth section for Vertex AI
  Addresses https://github.com/BerriAI/litellm/issues/5768#issuecomment-2365284223
* build(model_prices_and_context_window.json): show which models support prompt_caching
  Closes https://github.com/BerriAI/litellm/issues/5776
* fix(router.py): allow setting a default priority for requests
* fix(router.py): add 'retry-after' header for concurrent request limit errors
  Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(router.py): correctly raise and use the retry-after header from Azure + OpenAI
  Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(user_api_key_auth.py): fix valid token being None
* fix(auth_checks.py): fix model dump for the cache management object
* fix(user_api_key_auth.py): pass prisma_client to the object
* test(test_otel.py): update test for the new key check
* test: fix test
Krrish Dholakia | 4e5d5354c2 | 2024-08-10 14:15:12 -07:00

build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map
Closes https://github.com/BerriAI/litellm/issues/4881
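model_prices_and_context_window.json maps model names to per-model metadata, and this commit adds a 'supports_assistant_prefill' capability flag. A minimal sketch of reading such a flag, with hypothetical sample entries (the field values below are illustrative, not the real map contents):

```python
# Hypothetical excerpt of a model info map; only the key name
# 'supports_assistant_prefill' comes from the commit above.
MODEL_INFO = {
    "claude-3-5-sonnet-20240620": {
        "litellm_provider": "anthropic",
        "supports_assistant_prefill": True,
    },
    "gpt-4o": {
        "litellm_provider": "openai",
        # No 'supports_assistant_prefill' key: capability not advertised.
    },
}


def supports_assistant_prefill(model: str) -> bool:
    """Treat a missing model or missing flag as 'not supported'."""
    return MODEL_INFO.get(model, {}).get("supports_assistant_prefill", False)
```

A caller can use this to decide whether a trailing assistant message may be passed through as a prefill or must be rejected.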
Ishaan Jaff | f20a8ff364 | 2024-07-11 13:04:18 -07:00

test: get model info for gemini/gemini-1.5-flash
Krrish Dholakia | b580e0992d | 2024-06-13 15:54:24 -07:00

fix(utils.py): check if model info is for the model with the correct provider
Fixes an issue where incorrect pricing was used for a custom LLM provider
Krrish Dholakia | 2a2b97f093 | 2024-04-17 16:38:53 -07:00

fix(utils.py): accept {custom_llm_provider}/{model_name} in get_model_info
Fixes https://github.com/BerriAI/litellm/issues/3100
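The last two utils.py commits together describe one lookup behavior: get_model_info should accept both a bare model name and a '{custom_llm_provider}/{model_name}' form, and should reject an entry whose recorded provider does not match the requested one. A self-contained sketch of that behavior, with a hypothetical in-memory map standing in for the real model cost map:

```python
# Hypothetical model map; the real data lives in
# model_prices_and_context_window.json.
MODEL_MAP = {
    "gemini-1.5-flash": {"litellm_provider": "gemini", "max_tokens": 8192},
    "gpt-4": {"litellm_provider": "openai", "max_tokens": 4096},
}


def get_model_info(model: str) -> dict:
    """Look up model info by 'model' or '{custom_llm_provider}/{model}'.

    When a provider prefix is given, verify the entry actually belongs
    to that provider, so e.g. a custom provider cannot silently pick up
    another provider's pricing.
    """
    provider = None
    if "/" in model:
        provider, _, model = model.partition("/")
    info = MODEL_MAP.get(model)
    if info is None:
        raise ValueError(f"no model info for {model!r}")
    if provider is not None and info["litellm_provider"] != provider:
        raise ValueError(
            f"model info for {model!r} belongs to provider "
            f"{info['litellm_provider']!r}, not {provider!r}"
        )
    return info
```

With this check, `get_model_info("gemini/gemini-1.5-flash")` resolves normally, while `get_model_info("openai/gemini-1.5-flash")` raises instead of returning the wrong provider's pricing.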