Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-11 19:56:03 +00:00)
The goal is to consolidate the model listing endpoints. This is step 1: add a custom_metadata field to OpenAIModel that carries model_type, provider_id, provider_resource_id, and all model metadata from the native /v1/models response. Next steps: update the Stainless client to use /v1/openai/v1/models, migrate tests to read from custom_metadata, then remove the /v1/openai/v1/ prefix entirely.
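A minimal sketch of the idea, assuming Pydantic-style model classes; the class shapes and the to_openai_model helper below are illustrative assumptions, not the repository's actual definitions. It shows how the native model's fields could be folded into the new custom_metadata field on the OpenAI-compatible response.

```python
# Illustrative sketch only: surface the native /v1/models metadata on the
# OpenAI-compatible model object via a custom_metadata field. The class
# shapes and helper are assumptions, not llama-stack's actual definitions.
from typing import Any

from pydantic import BaseModel


class OpenAIModel(BaseModel):
    """OpenAI-compatible model entry returned by /v1/openai/v1/models."""

    id: str
    object: str = "model"
    created: int = 0
    owned_by: str = "llama_stack"
    # New field: carries model_type, provider_id, provider_resource_id, and
    # any native metadata so clients no longer need the native /v1/models.
    custom_metadata: dict[str, Any] | None = None


class Model(BaseModel):
    """Simplified stand-in for the native /v1/models resource."""

    identifier: str
    model_type: str
    provider_id: str
    provider_resource_id: str
    metadata: dict[str, Any] = {}


def to_openai_model(model: Model) -> OpenAIModel:
    """Fold the native model's fields into custom_metadata (hypothetical helper)."""
    return OpenAIModel(
        id=model.identifier,
        custom_metadata={
            "model_type": model.model_type,
            "provider_id": model.provider_id,
            "provider_resource_id": model.provider_resource_id,
            **model.metadata,
        },
    )
```

With this in place, a client reading /v1/openai/v1/models could recover everything it previously needed from the native listing by inspecting custom_metadata, which is what makes the later removal of the /v1/openai/v1/ prefix possible.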
| Name |
|---|
| routers |
| routing_tables |
| test_api_recordings.py |
| test_context.py |
| test_distribution.py |
| test_library_client_initialization.py |
| test_stack_list_deps.py |