forked from phoenix/litellm-mirror
Litellm lm studio embedding params (#6746)
* fix(ollama.py): fix get model info request (Fixes https://github.com/BerriAI/litellm/issues/6703)
* feat(anthropic/chat/transformation.py): support passing user id to anthropic via openai 'user' param
* docs(anthropic.md): document all supported openai params for anthropic
* test: fix tests
* fix: fix tests
* feat(jina_ai/): add rerank support (Closes https://github.com/BerriAI/litellm/issues/6691)
* test: handle service unavailable error
* fix(handler.py): refactor together ai rerank call
* test: update test to handle overloaded error
* test: fix test
* Litellm router trace (#6742)
* feat(router.py): add trace_id to parent functions - allows tracking retry/fallbacks
* feat(router.py): log trace id across retry/fallback logic - allows grouping llm logs for the same request
* test: fix tests
* fix: fix test
* fix(transformation.py): only set non-none stop_sequences
* Litellm router disable fallbacks (#6743)
* bump: version 1.52.6 → 1.52.7
* feat(router.py): enable dynamically disabling fallbacks - allows for enabling/disabling fallbacks per key
* feat(litellm_pre_call_utils.py): support setting 'disable_fallbacks' on litellm key
* test: fix test
* fix(exception_mapping_utils.py): map 'model is overloaded' to internal server error
* fix(lm_studio/embed): support translating lm studio optional params (see the usage sketch after this message)
* feat(auth_checks.py): fix auth check inside route - `/team/list` - fixes regression where a non-admin key with user_id=None was able to query all teams
* docs proxy_budget_rescheduler_min_time
* helm run DISABLE_SCHEMA_UPDATE
* docs helm pre sync hook
* fix migration job.yaml
* fix DATABASE_URL
* use existing spec for migrations job
* fix yaml on migrations job
* fix migration job
* update doc on pre sync hook
* fix migrations-job.yaml
* fix migration job
* fix prisma migration
* test - handle eol model claude-2, use claude-2.1 instead
* (docs) add instructions on how to contribute to docker image
* Update code blocks huggingface.md (#6737)
* Update prefix.md (#6734)
* fix test_supports_response_schema
* mark Helm PreSync as BETA
* (Feat) Add support for storing virtual keys in AWS SecretManager (#6728)
* add SecretManager to httpxSpecialProvider
* fix importing AWSSecretsManagerV2
* add unit testing for writing keys to AWS secret manager
* use KeyManagementEventHooks for key/generated events
* use event hooks for key management endpoints
* working AWSSecretsManagerV2
* fix write secret to AWS secret manager on /key/generate
* fix KeyManagementSettings
* use tasks for key management hooks
* add async_delete_secret
* add test for async_delete_secret
* use _delete_virtual_keys_from_secret_manager
* fix test secret manager
* test_key_generate_with_secret_manager_call
* fix check for key_management_settings
* sync_read_secret
* test_aws_secret_manager
* fix sync_read_secret
* use helper to check when _should_read_secret_from_secret_manager
* test_get_secret_with_access_mode
* test - handle eol model claude-2, use claude-2.1 instead
* docs AWS secret manager
* fix test_read_nonexistent_secret
* fix test_supports_response_schema
* ci/cd run again
* LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730)
* fix(ollama.py): fix get model info request (Fixes https://github.com/BerriAI/litellm/issues/6703)
* feat(anthropic/chat/transformation.py): support passing user id to anthropic via openai 'user' param
* docs(anthropic.md): document all supported openai params for anthropic
* test: fix tests
* fix: fix tests
* feat(jina_ai/): add rerank support (Closes https://github.com/BerriAI/litellm/issues/6691)
* test: handle service unavailable error
* fix(handler.py): refactor together ai rerank call
* test: update test to handle overloaded error
* test: fix test
* Litellm router trace (#6742)
* feat(router.py): add trace_id to parent functions - allows tracking retry/fallbacks
* feat(router.py): log trace id across retry/fallback logic - allows grouping llm logs for the same request
* test: fix tests
* fix: fix test
* fix(transformation.py): only set non-none stop_sequences
* Litellm router disable fallbacks (#6743)
* bump: version 1.52.6 → 1.52.7
* feat(router.py): enable dynamically disabling fallbacks - allows for enabling/disabling fallbacks per key
* feat(litellm_pre_call_utils.py): support setting 'disable_fallbacks' on litellm key
* test: fix test
* fix(exception_mapping_utils.py): map 'model is overloaded' to internal server error
* test: handle gemini error
* test: fix test
* fix: new run
* bump: version 1.52.7 → 1.52.8
* docs: add docs on jina ai rerank support
* docs(reliability.md): add tutorial on disabling fallbacks per key
* docs(logging.md): add 'trace_id' param to standard logging payload
* (feat) add bedrock/stability.stable-image-ultra-v1:0 (#6723)
* add stability.stable-image-ultra-v1:0
* add pricing for stability.stable-image-ultra-v1:0
* fix test_supports_response_schema
* ci/cd run again
* [Feature]: Stop swallowing up AzureOpenAI exception responses in litellm's implementation for a BadRequestError (#6745)
* fix azure exceptions
* test_bad_request_error_contains_httpx_response
* test_bad_request_error_contains_httpx_response
* use safe access to get exception response
* fix get attr
* [Feature]: json_schema in response support for Anthropic (#6748)
* _convert_tool_response_to_message
* fix ModelResponseIterator
* fix test_json_response_format
* test_json_response_format_stream
* fix _convert_tool_response_to_message
* use helper _handle_json_mode_chunk
* fix _process_response
* unit testing for test_convert_tool_response_to_message_no_arguments
* update doc for JSON mode
* fix: import audio check (#6740)
* fix image generation output_cost_per_image on model cost map (#6752)
* (feat) Vertex AI - add support for fine tuned embedding models (#6749)
* fix use fine tuned vertex embedding models
* test_vertex_embedding_url
* add _transform_openai_request_to_fine_tuned_embedding_request
* add _transform_openai_request_to_fine_tuned_embedding_request
* add transform_openai_request_to_vertex_embedding_request
* add _transform_vertex_response_to_openai_for_fine_tuned_models
* test_vertexai_embedding for ft models
* fix test_vertexai_embedding_finetuned
* doc fine tuned / custom embedding models
* fix test test_partner_models_httpx
* bump: version 1.52.8 → 1.52.9
* LiteLLM Minor Fixes & Improvements (11/13/2024) (#6729)
* fix(utils.py): add logprobs support for together ai (Fixes https://github.com/BerriAI/litellm/issues/6724)
* feat(pass_through_endpoints/): add anthropic/ pass-through endpoint - adds new `anthropic/` pass-through endpoint + refactors docs
* feat(spend_management_endpoints.py): allow /global/spend/report to query team + customer id - enables seeing spend for a customer in a team
* Add integration with MLflow Tracing (#6147)
* Add MLflow logger (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* Streaming handling (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* lint (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* address comments and fix issues (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* address comments and fix issues (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* Move logger construction code (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* Add docs (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* async handlers (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
* new picture (Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>)
---------
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
* fix(mlflow.py): fix ruff linting errors
* ci(config.yml): add mlflow to ci testing
* fix: fix test
* test: fix test
* Litellm key update fix (#6710)
* fix(caching): convert arg to equivalent kwargs in llm caching handler - prevent unexpected errors
* fix(caching_handler.py): don't pass args to caching
* fix(caching): remove all *args from caching.py
* fix(caching): consistent function signatures + abc method
* test(caching_unit_tests.py): add unit tests for llm caching - ensures coverage for common caching scenarios across different implementations
* refactor(litellm_logging.py): move to using cache key from hidden params instead of regenerating one
* fix(router.py): drop redis password requirement
* fix(proxy_server.py): fix faulty slack alerting check
* fix(langfuse.py): avoid copying functions/thread lock objects in metadata - fixes metadata copy error when parent otel span in metadata
* test: update test
* fix(key_management_endpoints.py): fix /key/update with metadata update
* fix(key_management_endpoints.py): fix key_prepare_update helper
* fix(key_management_endpoints.py): reset value to none if set in key update
* fix: update test
* Litellm dev 11 11 2024 (#6693)
* fix(__init__.py): add 'watsonx_text' as mapped llm api route (Fixes https://github.com/BerriAI/litellm/issues/6663)
* fix(opentelemetry.py): fix passing parallel tool calls to otel (Fixes https://github.com/BerriAI/litellm/issues/6677)
* refactor(test_opentelemetry_unit_tests.py): create a base set of unit tests for all logging integrations - test for parallel tool call handling, reduces bugs in repo
* fix(__init__.py): update provider-model mapping to include all known provider-model mappings (Fixes https://github.com/BerriAI/litellm/issues/6669)
* feat(anthropic): support passing document in llm api call
* docs(anthropic.md): add pdf anthropic call to docs + expose new 'supports_pdf_input' function
* fix(factory.py): fix linting error
* add clear doc string for GCS bucket logging
* Add docs to export logs to Laminar (#6674)
* Add docs to export logs to Laminar
* minor fix: newline at end of file
* place laminar after http and grpc
* (Feat) Add langsmith key based logging (#6682)
* add langsmith_api_key to StandardCallbackDynamicParams
* create a file for langsmith types
* langsmith add key / team based logging
* add key based logging for langsmith
* fix langsmith key based logging
* fix linting langsmith
* remove NOQA violation
* add unit test coverage for all helpers in test langsmith
* test_langsmith_key_based_logging
* docs langsmith key based logging
* run langsmith tests in logging callback tests
* fix logging testing
* test_langsmith_key_based_logging
* test_add_callback_via_key_litellm_pre_call_utils_langsmith
* add debug statement langsmith key based logging
* test_langsmith_key_based_logging
* (fix) OpenAI's optional messages[].name does not work with Mistral API (#6701)
* use helper for _transform_messages mistral
* add test_message_with_name to base LLMChat test
* fix linting
* add xAI on Admin UI (#6680)
* (docs) add benchmarks on 1K RPS (#6704)
* docs litellm proxy benchmarks
* docs GCS bucket
* doc fix - reduce clutter on logging doc title
* (feat) add cost tracking stable diffusion 3 on Bedrock (#6676)
* add cost tracking for sd3
* test_image_generation_bedrock
* fix get model info for image cost
* add cost_calculator for stability 1 models
* add unit testing for bedrock image cost calc
* test_cost_calculator_with_no_optional_params
* add test_cost_calculator_basic
* correctly allow size Optional
* fix cost_calculator
* sd3 unit tests cost calc
* fix raise correct error 404 when /key/info is called on non-existent key (#6653)
* fix raise correct error on /key/info
* add not_found_error error
* fix key not found in DB error
* use 1 helper for checking token hash
* fix error code on key info
* fix test key gen prisma
* test_generate_and_call_key_info
* test fix test_call_with_valid_model_using_all_models
* fix key info tests
* bump: version 1.52.4 → 1.52.5
* add defaults used for GCS logging
* LiteLLM Minor Fixes & Improvements (11/12/2024) (#6705)
* fix(caching): convert arg to equivalent kwargs in llm caching handler - prevent unexpected errors
* fix(caching_handler.py): don't pass args to caching
* fix(caching): remove all *args from caching.py
* fix(caching): consistent function signatures + abc method
* test(caching_unit_tests.py): add unit tests for llm caching - ensures coverage for common caching scenarios across different implementations
* refactor(litellm_logging.py): move to using cache key from hidden params instead of regenerating one
* fix(router.py): drop redis password requirement
* fix(proxy_server.py): fix faulty slack alerting check
* fix(langfuse.py): avoid copying functions/thread lock objects in metadata - fixes metadata copy error when parent otel span in metadata
* test: update test
* bump: version 1.52.5 → 1.52.6
* (feat) helm hook to sync db schema (#6715)
* v0 migration job
* fix job
* fix migrations job.yml
* handle standalone DB on helm hook
* fix argo cd annotations
* fix db migration helm hook
* fix migration job
* doc fix Using Http/2 with Hypercorn
* (fix proxy redis) Add redis sentinel support (#6154)
* add sentinel_password support
* add doc for setting redis sentinel password
* fix redis sentinel - use sentinel password
* Fix: Update gpt-4o costs to that of gpt-4o-2024-08-06 (#6714) (Fixes #6713)
* (fix) using Anthropic `response_format={"type": "json_object"}` (#6721)
* add support for response_format=json anthropic
* add test_json_response_format to baseLLM ChatTest
* fix test_litellm_anthropic_prompt_caching_tools
* fix test_anthropic_function_call_with_no_schema
* test test_create_json_tool_call_for_response_format
* (feat) Add cost tracking for Azure Dall-e-3 Image Generation + use base class to ensure basic image generation tests pass (#6716)
* add BaseImageGenTest
* use 1 class for unit testing
* add debugging to BaseImageGenTest
* TestAzureOpenAIDalle3
* fix response_cost_calculator
* test_basic_image_generation
* fix img gen basic test
* fix _select_model_name_for_cost_calc
* fix test_aimage_generation_bedrock_with_optional_params
* fix undo changes cost tracking
* fix response_cost_calculator
* fix test_cost_azure_gpt_35
* fix remove dup test (#6718)
* (build) update db helm hook
* (build) helm db pre sync hook
* (build) helm db sync hook
* test: run test_team_logging first
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
Co-authored-by: Kilian Lieret <kilian.lieret@posteo.de>
* test: update test
* test: skip anthropic overloaded error
* test: cleanup test
* test: update tests
* test: fix test
* test: handle gemini overloaded model error
* test: handle internal server error
* test: handle anthropic overloaded error
* test: handle claude instability
---------
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
Co-authored-by: Kilian Lieret <kilian.lieret@posteo.de>
---------
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Jongseob Jeon <aiden.jongseob@gmail.com>
Co-authored-by: Camden Clark <camdenaws@gmail.com>
Co-authored-by: Rasswanth <61219215+IamRash-7@users.noreply.github.com>
Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
Co-authored-by: Kilian Lieret <kilian.lieret@posteo.de>
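The headline change of this commit is the LM Studio embedding param translation. A minimal usage sketch, assuming a local LM Studio server; the model name and `api_base` are placeholders, not values taken from this commit:

```python
# Minimal sketch: with this change, OpenAI-only embedding params such as
# `dimensions` are dropped for lm_studio models when drop_params=True,
# instead of being rejected as unsupported.
import litellm

response = litellm.embedding(
    model="lm_studio/gemma2-9b-it",        # placeholder model name
    input=["hello world"],
    dimensions=1024,                       # unsupported by LM Studio -> dropped
    drop_params=True,
    api_base="http://localhost:1234/v1",   # placeholder LM Studio endpoint
)
```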
parent 51ffe93e77
commit ba28e52ee8
11 changed files with 128 additions and 8 deletions
```diff
@@ -1,4 +1,4 @@
-# Anthropic SDK
+# Anthropic `/v1/messages`

 Pass-through endpoints for Anthropic - call provider-specific endpoint, in native format (no translation).
```
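For context on the renamed doc page above, a hedged sketch of what the pass-through route looks like from a client; the proxy URL and key are placeholders:

```python
# Hedged sketch: a native Anthropic /v1/messages request routed through the
# LiteLLM proxy's anthropic/ pass-through endpoint (no translation applied).
import httpx

resp = httpx.post(
    "http://0.0.0.0:4000/anthropic/v1/messages",
    headers={
        "x-api-key": "sk-litellm-virtual-key",  # placeholder virtual key
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```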
litellm/__init__.py

```diff
@@ -1132,6 +1132,7 @@ from .llms.AzureOpenAI.chat.gpt_transformation import AzureOpenAIConfig
 from .llms.hosted_vllm.chat.transformation import HostedVLLMChatConfig
 from .llms.deepseek.chat.transformation import DeepSeekChatConfig
 from .llms.lm_studio.chat.transformation import LMStudioChatConfig
+from .llms.lm_studio.embed.transformation import LmStudioEmbeddingConfig
 from .llms.perplexity.chat.transformation import PerplexityChatConfig
 from .llms.AzureOpenAI.chat.o1_transformation import AzureOpenAIO1Config
 from .llms.watsonx.completion.handler import IBMWatsonXAIConfig
```
litellm/llms/lm_studio/embed/transformation.py (new file, 54 lines)

```python
"""
Transformation logic from OpenAI /v1/embeddings format to LM Studio's `/v1/embeddings` format.

Why separate file? Make it easy to see how transformation works

Docs - https://lmstudio.ai/docs/basics/server
"""

import types
from typing import List, Optional, Tuple

from litellm import LlmProviders
from litellm.secret_managers.main import get_secret_str
from litellm.types.utils import Embedding, EmbeddingResponse, Usage


class LmStudioEmbeddingConfig:
    """
    Reference: https://lmstudio.ai/docs/basics/server
    """

    def __init__(
        self,
    ) -> None:
        locals_ = locals()
        for key, value in locals_.items():
            if key != "self" and value is not None:
                setattr(self.__class__, key, value)

    @classmethod
    def get_config(cls):
        return {
            k: v
            for k, v in cls.__dict__.items()
            if not k.startswith("__")
            and not isinstance(
                v,
                (
                    types.FunctionType,
                    types.BuiltinFunctionType,
                    classmethod,
                    staticmethod,
                ),
            )
            and v is not None
        }

    def get_supported_openai_params(self) -> List[str]:
        return []

    def map_openai_params(
        self, non_default_params: dict, optional_params: dict
    ) -> dict:
        return optional_params
```
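A short sketch of how this config behaves, grounded directly in the file above: no OpenAI embedding params are advertised as supported, and the param mapping passes through whatever `optional_params` it is given:

```python
# Behavior of the new config as defined above: nothing is supported, so any
# OpenAI embedding param is subject to drop_params handling upstream.
from litellm.llms.lm_studio.embed.transformation import LmStudioEmbeddingConfig

config = LmStudioEmbeddingConfig()
assert config.get_supported_openai_params() == []
assert config.map_openai_params(
    non_default_params={"dimensions": 1024}, optional_params={}
) == {}
```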
litellm/proxy/_types.py

```diff
@@ -1131,7 +1131,6 @@ class KeyManagementSettings(LiteLLMBase):
     """
     If True, virtual keys created by litellm will be stored in the secret manager
     """

     prefix_for_stored_virtual_keys: str = "litellm/"
     """
     If set, this prefix will be used for stored virtual keys in the secret manager
```
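A tiny sketch of the setting documented above, assuming the class's remaining fields all have defaults (an assumption; only the lines shown in this hunk are from the commit):

```python
# The default prefix under which virtual keys are written to the secret manager.
from litellm.proxy._types import KeyManagementSettings

settings = KeyManagementSettings()  # assumes other fields have defaults
print(settings.prefix_for_stored_virtual_keys)  # "litellm/" by default
```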
litellm/proxy/auth/auth_checks.py

```diff
@@ -280,6 +280,22 @@ def allowed_routes_check(
     return False


+def allowed_route_check_inside_route(
+    user_api_key_dict: UserAPIKeyAuth,
+    requested_user_id: Optional[str],
+) -> bool:
+    ret_val = True
+    if (
+        user_api_key_dict.user_role != LitellmUserRoles.PROXY_ADMIN
+        and user_api_key_dict.user_role != LitellmUserRoles.PROXY_ADMIN_VIEW_ONLY
+    ):
+        ret_val = False
+    if requested_user_id is not None and user_api_key_dict.user_id is not None:
+        if user_api_key_dict.user_id == requested_user_id:
+            ret_val = True
+    return ret_val
+
+
 def get_actual_routes(allowed_routes: list) -> list:
     actual_routes: list = []
     for route_name in allowed_routes:
```
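The regression this helper fixes (per the commit message: a non-admin key with `user_id=None` could list all teams) comes down to the branches above. A small sketch of the resulting semantics, matching the parametrized test near the bottom of this diff:

```python
# Sketch of the helper's semantics using the new function above.
from litellm.proxy._types import LitellmUserRoles, UserAPIKeyAuth
from litellm.proxy.auth.auth_checks import allowed_route_check_inside_route

# admins (and view-only admins) are always allowed
assert allowed_route_check_inside_route(
    user_api_key_dict=UserAPIKeyAuth(user_role=LitellmUserRoles.PROXY_ADMIN),
    requested_user_id=None,
)
# the regression case: non-admin key with user_id=None is now denied
assert not allowed_route_check_inside_route(
    user_api_key_dict=UserAPIKeyAuth(user_role=LitellmUserRoles.TEAM, user_id=None),
    requested_user_id=None,
)
```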
litellm/proxy/hooks/key_management_event_hooks.py

```diff
@@ -26,7 +26,6 @@ from litellm.proxy._types import (
 # NOTE: This is the prefix for all virtual keys stored in AWS Secrets Manager
 LITELLM_PREFIX_STORED_VIRTUAL_KEYS = "litellm/"


 class KeyManagementEventHooks:

     @staticmethod
```
litellm/proxy/management_endpoints/team_endpoints.py

```diff
@@ -39,7 +39,10 @@ from litellm.proxy._types import (
     UpdateTeamRequest,
     UserAPIKeyAuth,
 )
-from litellm.proxy.auth.auth_checks import get_team_object
+from litellm.proxy.auth.auth_checks import (
+    allowed_route_check_inside_route,
+    get_team_object,
+)
 from litellm.proxy.auth.user_api_key_auth import _is_user_proxy_admin, user_api_key_auth
 from litellm.proxy.management_helpers.utils import (
     add_new_member,
```
```diff
@@ -1280,10 +1283,8 @@ async def list_team(
         prisma_client,
     )

-    if (
-        user_api_key_dict.user_role != LitellmUserRoles.PROXY_ADMIN
-        and user_api_key_dict.user_role != LitellmUserRoles.PROXY_ADMIN_VIEW_ONLY
-        and user_api_key_dict.user_id != user_id
+    if not allowed_route_check_inside_route(
+        user_api_key_dict=user_api_key_dict, requested_user_id=user_id
     ):
         raise HTTPException(
             status_code=401,
```
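From a client's perspective, a hedged sketch of the endpoint behavior after this refactor; the proxy URL, key, and the assumption that `/team/list` is a GET with a `user_id` query param are placeholders/assumptions, not taken from this diff:

```python
# Hedged sketch: /team/list now returns 401 unless the caller is a proxy
# admin (or view-only admin), or is requesting their own user_id.
import httpx

resp = httpx.get(
    "http://0.0.0.0:4000/team/list",
    params={"user_id": "1234"},
    headers={"Authorization": "Bearer sk-litellm-virtual-key"},  # placeholder
)
print(resp.status_code)  # 200 for admins / matching user_id, else 401
```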
litellm/utils.py

```diff
@@ -2385,6 +2385,16 @@ def get_optional_params_embeddings(  # noqa: PLR0915
         )
         final_params = {**optional_params, **kwargs}
         return final_params
+    elif custom_llm_provider == "lm_studio":
+        supported_params = (
+            litellm.LmStudioEmbeddingConfig().get_supported_openai_params()
+        )
+        _check_valid_arg(supported_params=supported_params)
+        optional_params = litellm.LmStudioEmbeddingConfig().map_openai_params(
+            non_default_params=non_default_params, optional_params={}
+        )
+        final_params = {**optional_params, **kwargs}
+        return final_params
     elif custom_llm_provider == "bedrock":
         # if dimensions is in non_default_params -> pass it for model=bedrock/amazon.titan-embed-text-v2
         if "amazon.titan-embed-text-v1" in model:
```
```diff
@@ -942,3 +942,12 @@ def test_forward_user_param():
     )

     assert optional_params["metadata"]["user_id"] == "test_user"
+
+
+def test_lm_studio_embedding_params():
+    optional_params = get_optional_params_embeddings(
+        model="lm_studio/gemma2-9b-it",
+        custom_llm_provider="lm_studio",
+        dimensions=1024,
+        drop_params=True,
+    )
+    assert len(optional_params) == 0
```
```diff
@@ -387,3 +387,31 @@ def test_is_api_route_allowed(route, user_role, expected_result):
         pass
     else:
         raise e
+
+
+from litellm.proxy._types import LitellmUserRoles
+
+
+@pytest.mark.parametrize(
+    "user_role, auth_user_id, requested_user_id, expected_result",
+    [
+        (LitellmUserRoles.PROXY_ADMIN, "1234", None, True),
+        (LitellmUserRoles.PROXY_ADMIN_VIEW_ONLY, None, "1234", True),
+        (LitellmUserRoles.TEAM, "1234", None, False),
+        (LitellmUserRoles.TEAM, None, None, False),
+        (LitellmUserRoles.TEAM, "1234", "1234", True),
+    ],
+)
+def test_allowed_route_inside_route(
+    user_role, auth_user_id, requested_user_id, expected_result
+):
+    from litellm.proxy.auth.auth_checks import allowed_route_check_inside_route
+    from litellm.proxy._types import UserAPIKeyAuth, LitellmUserRoles
+
+    assert (
+        allowed_route_check_inside_route(
+            user_api_key_dict=UserAPIKeyAuth(user_role=user_role, user_id=auth_user_id),
+            requested_user_id=requested_user_id,
+        )
+        == expected_result
+    )
```
```diff
@@ -3469,6 +3469,7 @@ async def test_key_generate_with_secret_manager_call(prisma_client):
     """
     from litellm.secret_managers.aws_secret_manager_v2 import AWSSecretsManagerV2
     from litellm.proxy._types import KeyManagementSystem, KeyManagementSettings

     from litellm.proxy.hooks.key_management_event_hooks import (
         LITELLM_PREFIX_STORED_VIRTUAL_KEYS,
     )
@@ -3517,6 +3518,7 @@ async def test_key_generate_with_secret_manager_call(prisma_client):
     await asyncio.sleep(2)

     # read from the secret manager

     result = await aws_secret_manager_client.async_read_secret(
         secret_name=f"{litellm._key_management_settings.prefix_for_stored_virtual_keys}/{key_alias}"
     )
@@ -3537,6 +3539,7 @@ async def test_key_generate_with_secret_manager_call(prisma_client):
     await asyncio.sleep(2)

     # Assert the key is deleted from the secret manager

     result = await aws_secret_manager_client.async_read_secret(
         secret_name=f"{litellm._key_management_settings.prefix_for_stored_virtual_keys}/{key_alias}"
     )
```