Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-25 02:34:29 +00:00.

All checks were successful: Read Version from pyproject.toml / read-version (push) succeeded in 36s.
* fix(create_user_button.tsx): allow admin to set the models a user has access to, on invite. Enables controlling model access on invite
* feat(auth_checks.py): enforce the 'no-model-access' special model name on the backend. Prevents a user from calling models if their default key has no model access
* fix(chat_ui.tsx): allow the user to input a custom model
* fix(chat_ui.tsx): pull available models based on the models the key has access to
* style(create_user_button.tsx): move the default model inside the 'personal key creation' accordion
* fix(chat_ui.tsx): fix linting error
* test(test_auth_checks.py): add unit test for the special model name
* docs(internal_user_endpoints.py): update docstring
* fix test_moderations_bad_model
* Litellm dev 02 27 2025 p6 (#8891)
  * fix(http_parsing_utils.py): orjson can throw errors on some emojis in text; default to json.loads
  * fix(sagemaker/handler.py): support passing a model id on async streaming
  * fix(litellm_pre_call_utils.py): fixes https://github.com/BerriAI/litellm/issues/7237
* Fix calling Claude via the invoke route + response_format support for Claude on the invoke route (#8908)
  * fix(anthropic_claude3_transformation.py): fix Amazon Anthropic Claude 3 tool-calling transformation on the invoke route; move to using the Anthropic config as the base
  * fix(utils.py): expose the Anthropic config via ProviderConfigManager
  * fix(llm_http_handler.py): support JSON mode on async completion calls
  * fix(invoke_handler/make_call): support JSON mode for Anthropic called via Bedrock invoke
  * fix(anthropic/): handle `response_format: {"type": "text"}` + migrate the Amazon Claude 3 invoke config to inherit from the Anthropic config. Prevents an error when passing in `response_format: {"type": "text"}`
  * test: fix test
  * fix(utils.py): fix the base invoke provider check
  * fix(anthropic_claude3_transformation.py): don't pass the 'stream' param
  * fix: fix linting errors
  * fix(converse_transformation.py): handle response_format type=text for Converse
* converse_transformation: pass 'description' if set in response_format (#8907)
  * test(test_bedrock_completion.py): e2e test ensuring the tool description is passed in
  * fix(converse_transformation.py): pass the description, if set
  * fix(transformation.py): fixes https://github.com/BerriAI/litellm/issues/8767#issuecomment-2689887663
* Fix Bedrock passing `response_format: {"type": "text"}` (#8900)
  * fix(converse_transformation.py): ignore `type: text` in response_format; a no-op for Bedrock
  * fix(converse_transformation.py): handle adding the response_format value to tools
  * fix(base_invoke_transformation.py): fix 'get_bedrock_invoke_provider' to handle cross-region-inference models
  * test(test_bedrock_completion.py): add unit testing for the Bedrock invoke provider logic
  * test: update test
  * fix(exception_mapping_utils.py): add context-window-exceeded error handling for the Databricks provider route
  * fix(fireworks_ai/): support passing tools + response_format together
  * fix: cleanup
  * fix(base_invoke_transformation.py): fix imports
* (Feat) Show Error Logs on the LiteLLM UI (#8904)
  * fix test_moderations_bad_model
  * use async_post_call_failure_hook
  * basic logging of errors in the DB
  * show status on the UI
  * UI: show request / response side by side
  * stash fixes
  * working; track the raw request
  * track error info in metadata
  * fix showing error / request / response logs
  * show traceback in the error viewer
  * UI with traceback of the error
  * fix async_post_call_failure_hook
  * fix(http_parsing_utils.py): orjson can throw errors on some emojis in text; default to json.loads
  * test_get_error_information
  * fix code quality
  * rename the proxy track-cost-callback test
  * _should_store_errors_in_spend_logs
  * feature-flag error logs
  * Revert "_should_store_errors_in_spend_logs". This reverts commit
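Two of the http_parsing_utils.py commits above note that orjson can throw on some inputs (e.g. certain emoji sequences) that the stdlib `json` module parses fine, so the fix defaults to `json.loads` on failure. A minimal sketch of that fallback pattern; the function name `safe_json_loads` is illustrative, not litellm's actual helper:

```python
import json

try:
    import orjson  # optional fast JSON parser
except ImportError:  # keep working without the dependency
    orjson = None


def safe_json_loads(raw: bytes):
    """Parse JSON with orjson when available, falling back to json.loads on any parse error."""
    if orjson is not None:
        try:
            return orjson.loads(raw)
        except Exception:
            # orjson can reject some inputs the stdlib accepts, per the commit note
            pass
    return json.loads(raw)
```

The fallback catches broadly rather than a specific exception type, since the goal is "never fail a request that stdlib json could have parsed."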
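The auth_checks.py commit above enforces a special 'no-model-access' model name so that a key granted no model access cannot call anything. A hedged sketch of that kind of check, assuming the common convention that an empty model list means "all models"; the names `NO_MODEL_ACCESS` and `key_allows_model` are illustrative, not litellm's actual implementation:

```python
NO_MODEL_ACCESS = "no-model-access"  # illustrative constant for the special model name


def key_allows_model(requested_model: str, key_models: list[str]) -> bool:
    """Return True if the key may call the requested model.

    The special 'no-model-access' entry blocks every call; otherwise an
    empty model list is treated as access to all models.
    """
    if NO_MODEL_ACCESS in key_models:
        return False
    return not key_models or requested_model in key_models
```

Checking the special name first matters: without it, a key whose list contained only 'no-model-access' would still block calls by accident of the membership test, but the intent would be implicit rather than enforced.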
example_config_yaml
test_configs
test_model_response_typing
adroit-crow-413218-bc47f303efc9.json
azure_fine_tune.jsonl
batch_job_results_furniture.jsonl
conftest copy.py
conftest.py
data_map.txt
eagle.wav
gettysburg.wav
large_text.py
messages_with_counts.py
model_cost.json
openai_batch_completions.jsonl
openai_batch_completions_router.jsonl
speech_vertex.mp3
test_aproxy_startup.py
test_audit_logs_proxy.py
test_auth_checks.py
test_banned_keyword_list.py
test_custom_callback_input.py
test_db_schema_changes.py
test_deployed_proxy_keygen.py
test_jwt.py
test_key_generate_dynamodb.py
test_key_generate_prisma.py
test_proxy_config_unit_test.py
test_proxy_custom_auth.py
test_proxy_custom_logger.py
test_proxy_encrypt_decrypt.py
test_proxy_exception_mapping.py
test_proxy_gunicorn.py
test_proxy_pass_user_config.py
test_proxy_reject_logging.py
test_proxy_routes.py
test_proxy_server.py
test_proxy_server_caching.py
test_proxy_server_keys.py
test_proxy_server_langfuse.py
test_proxy_server_spend.py
test_proxy_setting_guardrails.py
test_proxy_token_counter.py
test_proxy_utils.py
test_unit_test_max_model_budget_limiter.py
test_unit_test_proxy_hooks.py
test_update_spend.py
test_user_api_key_auth.py
vertex_key.json