Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-25 18:54:30 +00:00
fix(utils.py): fix vertex ai optional param handling (#8477)
* fix(utils.py): fix vertex ai optional param handling

  Don't pass max_retries to an unsupported route (usage sketched below).
  Fixes https://github.com/BerriAI/litellm/issues/8254

* fix(get_supported_openai_params.py): fix linting error
* fix(get_supported_openai_params.py): default to openai-like spec
* test: fix test
* fix: fix linting error
* Improved wildcard route handling on `/models` and `/model_group/info` (#8473)
  * fix(model_checks.py): filter known models from a wildcard by the given model prefix, so a wildcard route like `vertex_ai/gemini-*` returns only known `vertex_ai/gemini-` models
  * test(test_proxy_utils.py): add unit testing for the new 'get_known_models_from_wildcard' helper
  * test(test_models.py): add e2e testing for the `/model_group/info` endpoint
  * feat(prometheus.py): add initial support for tracking total requests by user_email on prometheus (sketched below)
  * test(test_prometheus.py): add testing to ensure user email is always tracked
  * test: update testing for the new prometheus metric
  * test(test_prometheus_unit_tests.py): add user email to the total proxy metric
  * test: update tests
  * test: fix spend tests
  * test: fix test
  * fix(pagerduty.py): fix linting error
* (Bug fix) - Using `include_usage` for /completions requests + unit testing (#8484) (sketched below)
  * pass stream options (#8419)
  * test_completion_streaming_usage_metrics
  * test_text_completion_include_usage

  Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>

* fix naming for the docker stable release
* build(model_prices_and_context_window.json): handle azure model update
* docs(token_auth.md): clarify that scopes can be a list or a comma-separated string (sketched below)
* docs: fix docs
* add sonar pricings (#8476)
  * add sonar pricings
  * Update model_prices_and_context_window.json
  * Update model_prices_and_context_window.json
  * Update model_prices_and_context_window_backup.json
* update load testing script
* fix test_async_router_context_window_fallback
* pplx - fix supports tool choice openai param (#8496)
* fix prom check startup (#8492)
* test_async_router_context_window_fallback
* ci(config.yml): mark daily docker builds with `-nightly` (#8499)

  Resolves https://github.com/BerriAI/litellm/discussions/8495

* (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442) (sketched below)
  * update RedisCluster creation
  * update RedisClusterCache
  * add redis ClusterCache
  * update async_set_cache_pipeline
  * cleanup redis cluster usage
  * fix redis pipeline
  * test_init_async_client_returns_same_instance
  * fix redis cluster
  * update mypy_path
  * fix init_redis_cluster
  * remove stub
  * test redis commit
  * ClusterPipeline
  * fix import
  * RedisCluster import
  * fix redis cluster
  * Potential fix for code scanning alert no. 2129: clear-text logging of sensitive information

    Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

  * fix naming of the redis cluster integration
  * test_redis_caching_ttl_pipeline
  * fix async_set_cache_pipeline

  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* Litellm UI stable version 02 12 2025 (#8497)
  * fix(key_management_endpoints.py): add `return_full_object` as a top-level query param on `/key/list`, letting the user request keys as a list of full objects (sketched below)
  * refactor(key_list.tsx): initial refactor of the key table in the user dashboard; offloads key filtering logic to the backend API and prevents the common error of a user not being able to see their keys
  * fix(key_management_endpoints.py): allow an internal user to query `/key/list` to see their keys
  * fix(key_management_endpoints.py): add validation checks and filtering to the `/key/list` endpoint, so an internal user can see their own keys and nobody else's
  * fix(view_key_table.tsx): fix issue where an internal user could not see default team keys
  * fix: fix linting error
  * fix: fix linting error
  * fix: fix linting error
  * fix: fix linting error
  * fix: fix linting error
  * fix: fix linting error
  * fix: fix linting error
* test_supports_tool_choice
* test_async_router_context_window_fallback
* fix: fix test (#8501)
* Litellm dev 02 12 2025 p1 (#8494)
  * Resolves https://github.com/BerriAI/litellm/issues/6625 (#8459): enables no auth for SMTP

    Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>

  * add sonar pricings (#8476)
    * add sonar pricings
    * Update model_prices_and_context_window.json
    * Update model_prices_and_context_window.json
    * Update model_prices_and_context_window_backup.json
  * test: fix test

  Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
  Co-authored-by: Dani Regli <1daniregli@gmail.com>
  Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>

* test: fix test
* UI Fixes p2 (#8502)
  * refactor(admin.tsx): clean up the "add new admin" flow; removes the buggy flow and leaves one simple way to add users / update roles
  * fix(user_search_modal.tsx): ensure the 'add member' button is always visible
  * fix(edit_membership.tsx): ensure the 'save changes' button is always visible
  * fix(internal_user_endpoints.py): ensure a user in an org can be deleted; fixes an issue where a user couldn't be deleted if they were a member of an org
  * fix: fix linting error
* add phoenix docs for observability integration (#8522)
  * Add files via upload
  * Update arize_integration.md
  * Update arize_integration.md
  * add Phoenix docs
* Added custom_attributes to additional_keys, which can be sent to athina (#8518)
* (UI) fix log details page (#8524)
  * rollback changes to view logs page
  * ui new build
  * add interface for prefetch
  * fix spread operation
  * fix max size for request view page
  * clean up table
  * ui fix column on request logs page
  * ui new build
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (#8475) (#8519)
  * [Bug] UI: Newly created key does not display on the View Key Page (#8039)
    - Fixed issue where all keys appeared blank for admin users.
    - Implemented filtering of data via team settings to ensure all keys are displayed correctly.
  * Fix:
    - Updated the validator to allow model editing when `keyTeam.team_alias === "Default Team"`.
    - Ensured other teams still follow the original validation rules.
  * - added some classes in global.css
    - added text wrap in the output of request, response, and metadata in index.tsx
    - fixed styles of the table in table.tsx
  * - added full payload when we open a single log entry
    - added Combined Info Card in index.tsx
  * fix: keys not showing on refresh for internal user
  * merge
  * main merge
  * cache page
  * ca remove
  * terms change
  * fix: places caching inside exp

---------

Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
Co-authored-by: Lucca Zenóbio <luccazen@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Dani Regli <1daniregli@gmail.com>
Co-authored-by: exiao <exiao@users.noreply.github.com>
Co-authored-by: vivek-athina <153479827+vivek-athina@users.noreply.github.com>
Co-authored-by: Taha Ali <123803932+tahaali-dev@users.noreply.github.com>
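A minimal usage sketch of the vertex param fix, assuming a direct litellm SDK call; the model name is illustrative. With drop_params=True, an OpenAI-style param such as max_retries that a vertex_ai route does not accept should be dropped rather than forwarded:

import litellm

# Illustrative model name; any vertex_ai model now goes through the
# consolidated param mapping (openai-like spec by default).
response = litellm.completion(
    model="vertex_ai/mistral-large@2407",
    messages=[{"role": "user", "content": "hello"}],
    max_retries=3,      # previously leaked through to routes that don't accept it
    drop_params=True,   # unsupported params are dropped, not forwarded
)
print(response.choices[0].message.content)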
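A sketch of the user_email tracking idea from the prometheus change. The metric name and label set here are hypothetical stand-ins for illustration, not litellm's actual metric definitions:

from prometheus_client import Counter

# Hypothetical metric; approximates "total requests by user_email".
proxy_total_requests = Counter(
    "litellm_proxy_total_requests",  # assumed name, for illustration only
    "Total requests received by the proxy",
    labelnames=["user_email"],
)

# Incremented once per request, keyed by the calling user's email.
proxy_total_requests.labels(user_email="jane@example.com").inc()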
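The include_usage fix mirrors the OpenAI streaming API: passing stream_options={"include_usage": True} should yield a trailing chunk that carries token usage. A hedged sketch, with an illustrative model name:

import litellm

stream = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say hello",
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    usage = getattr(chunk, "usage", None)
    if usage:  # usage arrives on the trailing chunk
        print(usage)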
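The token_auth docs change says scopes may arrive either as a list or as a comma-separated string. A minimal sketch of that contract; the helper name is ours, not litellm's:

from typing import List, Union

def normalize_scopes(scopes: Union[str, List[str]]) -> List[str]:
    # Accept either ["read", "write"] or "read,write" / "read, write".
    if isinstance(scopes, str):
        return [s.strip() for s in scopes.split(",") if s.strip()]
    return list(scopes)

assert normalize_scopes("read, write") == ["read", "write"]
assert normalize_scopes(["read", "write"]) == ["read", "write"]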
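A standalone redis-py sketch of the cluster + pipeline pattern the redis fixes target, assuming redis-py >= 5 and a reachable local cluster node; in litellm this sits behind the cache layer (RedisClusterCache / async_set_cache_pipeline), shown here directly:

import asyncio
from redis.asyncio.cluster import RedisCluster

async def main() -> None:
    # Assumed local cluster entry point; real deployments pass startup nodes.
    rc = RedisCluster(host="localhost", port=7000)
    async with rc.pipeline() as pipe:  # a ClusterPipeline under the hood
        pipe.set("litellm:key1", "v1", ex=60)
        pipe.set("litellm:key2", "v2", ex=60)
        await pipe.execute()  # commands are batched per owning node
    await rc.aclose()

asyncio.run(main())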
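A sketch of the new /key/list query param, using a placeholder proxy address and admin key; return_full_object=true asks for full key objects rather than bare token strings:

import requests

resp = requests.get(
    "http://localhost:4000/key/list",             # placeholder proxy address
    headers={"Authorization": "Bearer sk-1234"},  # placeholder admin key
    params={"return_full_object": "true"},
)
resp.raise_for_status()
print(resp.json())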
parent 805e832440
commit 8903bd1c7f

6 changed files with 68 additions and 44 deletions
litellm/utils.py

@@ -3166,51 +3166,56 @@ def get_optional_params(  # noqa: PLR0915
                 else False
             ),
         )
-    elif custom_llm_provider == "vertex_ai" and model in litellm.vertex_llama3_models:
-        optional_params = litellm.VertexAILlama3Config().map_openai_params(
-            non_default_params=non_default_params,
-            optional_params=optional_params,
-            model=model,
-            drop_params=(
-                drop_params
-                if drop_params is not None and isinstance(drop_params, bool)
-                else False
-            ),
-        )
-    elif custom_llm_provider == "vertex_ai" and model in litellm.vertex_mistral_models:
-        if "codestral" in model:
-            optional_params = litellm.CodestralTextCompletionConfig().map_openai_params(
-                model=model,
-                non_default_params=non_default_params,
-                optional_params=optional_params,
-                drop_params=(
-                    drop_params
-                    if drop_params is not None and isinstance(drop_params, bool)
-                    else False
-                ),
-            )
-        else:
-            optional_params = litellm.MistralConfig().map_openai_params(
-                model=model,
-                non_default_params=non_default_params,
-                optional_params=optional_params,
-                drop_params=(
-                    drop_params
-                    if drop_params is not None and isinstance(drop_params, bool)
-                    else False
-                ),
-            )
-    elif custom_llm_provider == "vertex_ai" and model in litellm.vertex_ai_ai21_models:
-        optional_params = litellm.VertexAIAi21Config().map_openai_params(
-            non_default_params=non_default_params,
-            optional_params=optional_params,
-            model=model,
-            drop_params=(
-                drop_params
-                if drop_params is not None and isinstance(drop_params, bool)
-                else False
-            ),
-        )
+    elif custom_llm_provider == "vertex_ai":
+        if model in litellm.vertex_mistral_models:
+            if "codestral" in model:
+                optional_params = (
+                    litellm.CodestralTextCompletionConfig().map_openai_params(
+                        model=model,
+                        non_default_params=non_default_params,
+                        optional_params=optional_params,
+                        drop_params=(
+                            drop_params
+                            if drop_params is not None and isinstance(drop_params, bool)
+                            else False
+                        ),
+                    )
+                )
+            else:
+                optional_params = litellm.MistralConfig().map_openai_params(
+                    model=model,
+                    non_default_params=non_default_params,
+                    optional_params=optional_params,
+                    drop_params=(
+                        drop_params
+                        if drop_params is not None and isinstance(drop_params, bool)
+                        else False
+                    ),
+                )
+        elif model in litellm.vertex_ai_ai21_models:
+            optional_params = litellm.VertexAIAi21Config().map_openai_params(
+                non_default_params=non_default_params,
+                optional_params=optional_params,
+                model=model,
+                drop_params=(
+                    drop_params
+                    if drop_params is not None and isinstance(drop_params, bool)
+                    else False
+                ),
+            )
+        else:  # use generic openai-like param mapping
+            optional_params = litellm.VertexAILlama3Config().map_openai_params(
+                non_default_params=non_default_params,
+                optional_params=optional_params,
+                model=model,
+                drop_params=(
+                    drop_params
+                    if drop_params is not None and isinstance(drop_params, bool)
+                    else False
+                ),
+            )
     elif custom_llm_provider == "sagemaker":
         # temperature, top_p, n, stream, stop, max_tokens, n, presence_penalty default to None
         optional_params = litellm.SagemakerConfig().map_openai_params(
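Read as plain control flow, the new vertex_ai branch selects a config class in this order. A minimal sketch mirroring the diff; the function here is illustrative, not litellm internals:

def pick_vertex_config(model: str, mistral_models: set, ai21_models: set) -> str:
    # Mirrors the nested vertex_ai branch above.
    if model in mistral_models:
        return "CodestralTextCompletionConfig" if "codestral" in model else "MistralConfig"
    if model in ai21_models:
        return "VertexAIAi21Config"
    return "VertexAILlama3Config"  # generic openai-like fallback

assert pick_vertex_config("codestral-2405", {"codestral-2405"}, set()) == "CodestralTextCompletionConfig"
assert pick_vertex_config("gemini-1.5-pro", set(), set()) == "VertexAILlama3Config"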