fix: update tests for OpenAI-style models endpoint

The llama-stack-client now uses /v1/openai/v1/models, which returns OpenAI-compatible model objects carrying 'id' and 'custom_metadata' fields instead of the Resource-style 'identifier' field. Updated api_recorder to handle the new endpoint and modified tests to access model metadata accordingly. Deleted stale model recordings so they can be re-recorded.
Ashwin Bharambe 2025-11-03 16:16:16 -08:00
parent 4a5ef65286
commit 809dae01c2
24 changed files with 823 additions and 6697 deletions


@@ -160,7 +160,7 @@ def client_with_models(
     providers = [p for p in client.providers.list() if p.api == "inference"]
     assert len(providers) > 0, "No inference providers found"
-    model_ids = {m.identifier for m in client.models.list()}
+    model_ids = {m.id for m in client.models.list()}
     if text_model_id and text_model_id not in model_ids:
         raise ValueError(f"text_model_id {text_model_id} not found")
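The shape change behind this diff can be sketched in isolation. The field names ('identifier' vs. 'id'/'custom_metadata') come from the commit message; the dataclasses and the `collect_model_ids` helper below are hypothetical stand-ins for illustration, not the real llama-stack-client types:

```python
from dataclasses import dataclass, field

@dataclass
class ResourceModel:
    # Old Resource-style object: name lives in 'identifier'.
    identifier: str

@dataclass
class OpenAIModel:
    # New OpenAI-compatible object from /v1/openai/v1/models:
    # name lives in 'id', extra info in 'custom_metadata'.
    id: str
    custom_metadata: dict = field(default_factory=dict)

def collect_model_ids(models):
    # Prefer the new 'id' field; fall back to 'identifier' only when
    # reading objects recorded against the old endpoint.
    return {getattr(m, "id", None) or getattr(m, "identifier") for m in models}

old = [ResourceModel(identifier="llama3.1-8b")]
new = [OpenAIModel(id="llama3.1-8b", custom_metadata={"context_length": 8192})]

# Both shapes resolve to the same set of model IDs.
assert collect_model_ids(old) == collect_model_ids(new) == {"llama3.1-8b"}
```

Tests that hard-coded `m.identifier`, as in the hunk above, break against the new shape, which is why the fixture now builds its set from `m.id`.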